Editing as Unlearning: Are Knowledge Editing Methods Strong Baselines for Large Language Model Unlearning?
Large Language Model (LLM) unlearning, i.e., selectively removing information from LLMs, is vital for responsible model deployment. In contrast, LLM knowledge editing aims to modify LLM knowledge rather than remove it. Although editing and unlearning appear to be two distinct tasks, we find a tight connection between them. In this paper, we conceptualize unlearning as a special case of editing in which information is modified to a refusal or "empty set" $\emptyset$ response, signifying its removal. This paper thus investigates whether knowledge editing techniques are strong baselines for LLM unlearning. We evaluate state-of-the-art (SOTA) editing methods (e.g., ROME, MEMIT, GRACE, WISE, and AlphaEdit) against existing unlearning approaches on pretrained and finetuned knowledge. Results show that certain editing methods, notably WISE and AlphaEdit, are effective unlearning baselines, especially for pretrained knowledge, and excel at generating human-aligned refusal answers. To better adapt editing methods for unlearning applications, we propose practical recipes including self-improvement and query merging. The former leverages the LLM's own in-context learning ability to craft a more human-aligned unlearning target, and the latter enables ROME and MEMIT to perform well in unlearning longer sample sequences. We advocate for the unlearning community to adopt SOTA editing methods as baselines and explore unlearning from an editing perspective for more holistic LLM memory control.
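To make the "editing as unlearning" framing concrete, the sketch below shows how an off-the-shelf knowledge editor could be repurposed as an unlearning routine by mapping each forget query to a refusal or empty-set target, as the abstract describes. The `edit_fn` callable and its signature are hypothetical placeholders for an editing method such as ROME, MEMIT, WISE, or AlphaEdit, not the paper's actual implementation.

```python
# Minimal sketch of the "editing as unlearning" idea, under assumed interfaces.

REFUSAL_TARGET = "I'm sorry, I cannot provide that information."  # human-aligned refusal
EMPTY_SET_TARGET = "∅"  # literal "empty set" target mentioned in the abstract


def unlearn_via_editing(model, forget_queries, edit_fn, target=REFUSAL_TARGET):
    """Treat unlearning as a special case of editing.

    edit_fn(model, prompt, target) is assumed to apply a single knowledge edit
    (e.g., a localized weight update in the style of ROME/MEMIT) and return the
    edited model. Each query in the forget set is redirected to the chosen
    refusal or empty-set target, signifying removal of the associated knowledge.
    """
    for query in forget_queries:
        model = edit_fn(model, prompt=query, target=target)
    return model
```

A query-merging variant, in the spirit of the recipe mentioned above, would concatenate several forget samples into one longer prompt before calling `edit_fn`, so that locate-then-edit methods handle longer sequences in a single update; the exact merging scheme here is an assumption, not the paper's specification.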
Zexi Li, Xiangzhu Wang, William F. Shen, Meghdad Kurmanji, Xinchi Qiu, Dongqi Cai, Chao Wu, Nicholas D. Lane
Computing Technology; Computer Technology
Zexi Li, Xiangzhu Wang, William F. Shen, Meghdad Kurmanji, Xinchi Qiu, Dongqi Cai, Chao Wu, Nicholas D. Lane. Editing as Unlearning: Are Knowledge Editing Methods Strong Baselines for Large Language Model Unlearning? [EB/OL]. (2025-05-26) [2025-06-07]. https://arxiv.org/abs/2505.19855.