OpenUnlearning: Accelerating LLM Unlearning via Unified Benchmarking of Methods and Metrics (2506.12618v1)
Abstract: Robust unlearning is crucial for safely deploying LLMs in environments where data privacy, model safety, and regulatory compliance must be ensured. Yet the task is inherently challenging, partly due to difficulties in reliably measuring whether unlearning has truly occurred. Moreover, fragmentation in current methodologies and inconsistent evaluation metrics hinder comparative analysis and reproducibility. To unify and accelerate research efforts, we introduce OpenUnlearning, a standardized and extensible framework designed explicitly for benchmarking both LLM unlearning methods and metrics. OpenUnlearning integrates 9 unlearning algorithms and 16 diverse evaluations across 3 leading benchmarks (TOFU, MUSE, and WMDP), and enables analyses of forgetting behaviors across 450+ checkpoints that we publicly release. Leveraging OpenUnlearning, we propose a novel meta-evaluation benchmark focused specifically on assessing the faithfulness and robustness of the evaluation metrics themselves. We also benchmark diverse unlearning methods and provide a comparative analysis using an extensive evaluation suite. Overall, we establish a clear, community-driven pathway toward rigorous development in LLM unlearning research.
- Vineeth Dorna
- Anmol Mekala
- Wenlong Zhao
- Andrew McCallum
- Zachary C. Lipton
- J. Zico Kolter
- Pratyush Maini