
When Forgetting Triggers Backdoors: A Clean Unlearning Attack

Source: arXiv
Abstract

Machine unlearning has emerged as a key component in upholding the "Right to be Forgotten", enabling the removal of specific data points from trained models. However, even when unlearning is performed without poisoning the forget-set (clean unlearning), the process can be exploited for stealthy attacks that existing defenses struggle to detect. In this paper, we propose a novel clean backdoor attack that exploits both the model's learning phase and subsequent unlearning requests. Unlike traditional backdoor methods, our approach injects a weak, distributed malicious signal across multiple classes during the first phase. The actual attack is then activated and amplified by selectively unlearning non-poisoned samples. This strategy yields a powerful and stealthy attack that is hard to detect or mitigate, exposing critical vulnerabilities in current unlearning mechanisms and underscoring the need for more robust defenses.
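To make the two-phase mechanism concrete, below is a minimal data-side sketch in Python/NumPy. The trigger pattern, blend strength alpha, poison rate, and forget-set selection rule are all illustrative assumptions; the abstract does not specify the paper's actual trigger design, labeling strategy, or sample-selection heuristic.

```python
# Minimal sketch of the two-phase "clean unlearning" backdoor described above.
# All concrete choices (trigger shape, alpha, rates) are assumptions for
# illustration, not the paper's actual procedure.
import numpy as np

rng = np.random.default_rng(0)

def add_weak_trigger(x, trigger, alpha=0.05):
    """Blend a faint trigger into an image; a low alpha keeps the signal weak."""
    return (1 - alpha) * x + alpha * trigger

# Toy dataset: 1000 grayscale 8x8 "images" over 10 classes.
X = rng.random((1000, 8, 8))
y = rng.integers(0, 10, size=1000)
trigger = np.zeros((8, 8))
trigger[-2:, -2:] = 1.0  # small corner patch used as the backdoor trigger

# Phase 1 (training time): spread a WEAK poisoned signal across many classes,
# so no single class shows a suspicious concentration of triggered samples.
poison_rate = 0.02
poisoned_idx = rng.choice(len(X), size=int(poison_rate * len(X)), replace=False)
X[poisoned_idx] = add_weak_trigger(X[poisoned_idx], trigger)
# ... train the model normally on (X, y) here; the backdoor stays dormant ...

# Phase 2 (unlearning time): the forget request names only NON-poisoned
# samples, chosen so their removal amplifies the dormant trigger (uniform
# sampling here is a placeholder for the attacker's selection heuristic).
clean_idx = np.setdiff1d(np.arange(len(X)), poisoned_idx)
forget_set = rng.choice(clean_idx, size=50, replace=False)
print(f"poisoned: {len(poisoned_idx)} samples; forget-set (all clean): {len(forget_set)}")
```

Because every sample in the forget request is genuinely clean, a defense that audits the forget-set for poisoned inputs sees nothing anomalous, which is what makes the attack stealthy.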

Marco Arazzi, Antonino Nocera, Vinod P.

Computing Technology, Computer Technology

Marco Arazzi, Antonino Nocera, Vinod P. When Forgetting Triggers Backdoors: A Clean Unlearning Attack [EB/OL]. (2025-06-14) [2025-06-23]. https://arxiv.org/abs/2506.12522
