National Preprint Platform

From Dormant to Deleted: Tamper-Resistant Unlearning Through Weight-Space Regularization


Source: arXiv

Abstract

Recent unlearning methods for LLMs are vulnerable to relearning attacks: knowledge believed to be unlearned re-emerges after fine-tuning on a small set of (even seemingly unrelated) examples. We study this phenomenon in a controlled setting for example-level unlearning in vision classifiers. We make the surprising discovery that forget-set accuracy can recover from around 50% post-unlearning to nearly 100% with fine-tuning on just the retain set -- i.e., zero examples of the forget set. We observe this effect across a wide variety of unlearning methods, whereas for a model retrained from scratch excluding the forget set (the gold standard), the accuracy remains at 50%. We observe that resistance to relearning attacks can be predicted by weight-space properties, specifically, the $L_2$-distance and linear mode connectivity between the original and the unlearned model. Leveraging this insight, we propose a new class of methods that achieve state-of-the-art resistance to relearning attacks.
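The two weight-space diagnostics the abstract names can be sketched concretely: the $L_2$-distance is the norm of the difference between the flattened parameter vectors of the original and unlearned models, and linear mode connectivity is probed by evaluating the loss along the straight line between the two weight settings. The following is a minimal numpy sketch under illustrative assumptions; the parameter-dict layout and function names (`l2_weight_distance`, `interpolate_weights`) are ours, not the paper's.

```python
import numpy as np

def l2_weight_distance(params_a, params_b):
    """L2 distance between two models' weights, treating all
    parameter tensors as one flattened vector (illustrative)."""
    diffs = [np.ravel(params_a[k] - params_b[k]) for k in params_a]
    return float(np.linalg.norm(np.concatenate(diffs)))

def interpolate_weights(params_a, params_b, alpha):
    """Point on the straight line between the two weight settings.
    Linear mode connectivity is probed by evaluating the loss at
    several alphas in [0, 1] and checking for a barrier."""
    return {k: (1.0 - alpha) * params_a[k] + alpha * params_b[k]
            for k in params_a}
```

Evaluating the model at `interpolate_weights(original, unlearned, alpha)` for a grid of `alpha` values gives the loss profile whose maximum rise above the endpoints is the usual mode-connectivity barrier.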

Shoaib Ahmed Siddiqui, Adrian Weller, David Krueger, Gintare Karolina Dziugaite, Michael Curtis Mozer, Eleni Triantafillou

Subject: Computing Technology; Computer Technology

Shoaib Ahmed Siddiqui, Adrian Weller, David Krueger, Gintare Karolina Dziugaite, Michael Curtis Mozer, Eleni Triantafillou. From Dormant to Deleted: Tamper-Resistant Unlearning Through Weight-Space Regularization [EB/OL]. (2025-05-28) [2025-07-21]. https://arxiv.org/abs/2505.22310.
