
OpenUnlearning: Accelerating LLM Unlearning via Unified Benchmarking of Methods and Metrics


Source: arXiv

Abstract

Robust unlearning is crucial for safely deploying large language models (LLMs) in environments where data privacy, model safety, and regulatory compliance must be ensured. Yet the task is inherently challenging, partly due to difficulties in reliably measuring whether unlearning has truly occurred. Moreover, fragmentation in current methodologies and inconsistent evaluation metrics hinder comparative analysis and reproducibility. To unify and accelerate research efforts, we introduce OpenUnlearning, a standardized and extensible framework designed explicitly for benchmarking both LLM unlearning methods and metrics. OpenUnlearning integrates 9 unlearning algorithms and 16 diverse evaluations across 3 leading benchmarks (TOFU, MUSE, and WMDP) and also enables analyses of forgetting behaviors across 450+ checkpoints we publicly release. Leveraging OpenUnlearning, we propose a novel meta-evaluation benchmark focused specifically on assessing the faithfulness and robustness of evaluation metrics themselves. We also benchmark diverse unlearning methods and provide a comparative analysis against an extensive evaluation suite. Overall, we establish a clear, community-driven pathway toward rigorous development in LLM unlearning research.
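The abstract lists the unlearning algorithms and benchmarks the framework integrates without detailing any single method. For readers unfamiliar with the area, the sketch below illustrates one of the simplest baselines such suites commonly include: gradient ascent on the forget set, i.e., maximizing the language-modeling loss on data to be forgotten. This is only an illustration under assumed placeholder names (forget_texts, the gpt2 stand-in model) and is not OpenUnlearning's actual interface.

# Minimal illustrative sketch of a gradient-ascent unlearning step.
# All names here are hypothetical placeholders, not the OpenUnlearning API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

forget_texts = ["example passage the model should no longer reproduce"]  # hypothetical forget set

model.train()
for text in forget_texts:
    batch = tokenizer(text, return_tensors="pt")
    outputs = model(**batch, labels=batch["input_ids"])
    # Gradient ascent: maximize the LM loss on the forget set
    # by minimizing its negation.
    loss = -outputs.loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()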

Vineeth Dorna, Anmol Mekala, Wenlong Zhao, Andrew McCallum, Zachary C. Lipton, J. Zico Kolter, Pratyush Maini

Subject areas: Computing Technology; Computer Technology

Vineeth Dorna, Anmol Mekala, Wenlong Zhao, Andrew McCallum, Zachary C. Lipton, J. Zico Kolter, Pratyush Maini. OpenUnlearning: Accelerating LLM Unlearning via Unified Benchmarking of Methods and Metrics [EB/OL]. (2025-06-14) [2025-07-16]. https://arxiv.org/abs/2506.12618.
