Verifying Robust Unlearning: Probing Residual Knowledge in Unlearned Models

Source: arXiv
Abstract

Machine Unlearning (MUL) is crucial for privacy protection and content regulation, yet recent studies reveal that traces of forgotten information persist in unlearned models, enabling adversaries to resurface removed knowledge. Existing verification methods only confirm whether unlearning was executed, failing to detect such residual information leaks. To address this, we introduce the concept of Robust Unlearning, ensuring models are indistinguishable from retraining and resistant to adversarial recovery. To empirically evaluate whether unlearning techniques meet this security standard, we propose the Unlearning Mapping Attack (UMA), a post-unlearning verification framework that actively probes models for forgotten traces using adversarial queries. Extensive experiments on discriminative and generative tasks show that existing unlearning techniques remain vulnerable, even when passing existing verification metrics. By establishing UMA as a practical verification tool, this study sets a new standard for assessing and enhancing machine unlearning security.
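To make the probing idea concrete, below is a minimal sketch of what an adversarial residual-knowledge probe could look like for a discriminative model. It assumes a PyTorch classifier returning logits; the PGD-style targeted search, the function name probe_residual, and all hyperparameters are illustrative assumptions for exposition, not the paper's actual UMA algorithm.

    # Sketch: probe an unlearned classifier for residual knowledge by
    # searching for a small input perturbation that resurfaces the
    # forgotten label. A recovery rate well above chance suggests the
    # forgotten information still persists in the model.
    import torch
    import torch.nn.functional as F

    def probe_residual(model, x, y_forgotten, epsilon=0.03, alpha=0.005, steps=40):
        """Return the fraction of inputs whose forgotten label is recovered.

        model       -- unlearned classifier mapping inputs to logits
        x           -- batch of inputs associated with the forgotten data
        y_forgotten -- labels the model was supposed to have forgotten
        """
        model.eval()
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            logits = model(x + delta)
            # Targeted attack: descend the loss toward the forgotten label.
            loss = F.cross_entropy(logits, y_forgotten)
            loss.backward()
            with torch.no_grad():
                delta -= alpha * delta.grad.sign()
                delta.clamp_(-epsilon, epsilon)  # keep the probe perturbation small
            delta.grad.zero_()
        with torch.no_grad():
            pred = model(x + delta).argmax(dim=1)
        return (pred == y_forgotten).float().mean().item()

Under the paper's Robust Unlearning standard, a model indistinguishable from one retrained without the forgotten data should yield a recovery rate no better than chance under such probes.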

Hao Xuan, Xingyu Li

Computing Technology, Computer Technology

Hao Xuan, Xingyu Li. Verifying Robust Unlearning: Probing Residual Knowledge in Unlearned Models [EB/OL]. (2025-04-20) [2025-05-04]. https://arxiv.org/abs/2504.14798.
