
BURN: Backdoor Unlearning via Adversarial Boundary Analysis

Source: arXiv
Abstract

Backdoor unlearning aims to remove backdoor-related information while preserving the model's original functionality. However, existing unlearning methods mainly focus on recovering trigger patterns but fail to restore the correct semantic labels of poison samples. This limitation prevents them from fully eliminating the false correlation between the trigger pattern and the target label. To address this, we leverage boundary adversarial attack techniques, revealing two key observations. First, poison samples exhibit significantly greater distances from the decision boundary than clean samples, indicating that they require larger adversarial perturbations to change their predictions. Second, while the adversarial predicted labels of clean samples are uniformly distributed, those of poison samples tend to revert to their original correct labels. Moreover, after adding adversarial perturbations, the features of poison samples closely resemble those of the corresponding clean samples. Building upon these insights, we propose Backdoor Unlearning via adversaRial bouNdary analysis (BURN), a novel defense framework that integrates false correlation decoupling, progressive data refinement, and model purification. In the first phase, BURN employs adversarial boundary analysis to detect poisoned samples based on their abnormal adversarial boundary distances, and then restores their correct semantic labels for fine-tuning. In the second phase, it employs a feedback mechanism that tracks prediction discrepancies between the original backdoored model and progressively sanitized models, guiding both dataset refinement and model purification. Extensive evaluations across multiple datasets, architectures, and seven diverse backdoor attack types confirm that BURN effectively removes backdoor threats while maintaining the model's original performance.
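The following is a minimal, illustrative sketch (not the paper's actual implementation) of the adversarial-boundary-analysis idea described in the abstract, written in PyTorch. The FGSM-style boundary probe, the function names, and the threshold tau are assumptions introduced for illustration; the paper's procedure may use a stronger boundary attack and a different detection criterion.

import torch
import torch.nn.functional as F

def boundary_distance_and_flip(model, x, y, eps_grid):
    """Return the smallest perturbation budget that flips the prediction of x,
    together with the label predicted after the flip.

    x: (1, C, H, W) input tensor; y: (1,) original (possibly poisoned) label.
    eps_grid: increasing sequence of L-inf budgets to probe (assumed schedule).
    """
    model.eval()
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad_sign = torch.autograd.grad(loss, x)[0].sign()  # FGSM ascent direction

    for eps in eps_grid:
        x_adv = torch.clamp(x.detach() + eps * grad_sign, 0.0, 1.0)
        pred = model(x_adv).argmax(dim=1)
        if pred.item() != y.item():
            return eps, pred.item()           # decision boundary crossed at this budget
    return float("inf"), y.item()             # never flipped within the probed budgets

def refine_labels(model, dataset, tau, eps_grid):
    """Flag samples with abnormally large boundary distance as suspected poison and
    relabel them with the adversarial predicted label (tau is a hypothetical threshold)."""
    refined = []
    for x, y in dataset:
        d, flipped = boundary_distance_and_flip(
            model, x.unsqueeze(0), torch.tensor([y]), eps_grid)
        # Per the abstract, poison samples sit far from the boundary and their
        # adversarial predictions tend to revert to the correct class.
        new_y = flipped if d > tau else y
        refined.append((x, new_y))
    return refined

In the described framework, the relabeled data would then drive fine-tuning of the backdoored model, with the feedback mechanism between the original and progressively sanitized models guiding further dataset refinement and model purification.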

Yanghao Su, Jie Zhang, Yiming Li, Tianwei Zhang, Qing Guo, Weiming Zhang, Nenghai Yu, Nils Lukas, Wenbo Zhou

Subject: Computing Technology, Computer Science

Yanghao Su, Jie Zhang, Yiming Li, Tianwei Zhang, Qing Guo, Weiming Zhang, Nenghai Yu, Nils Lukas, Wenbo Zhou. BURN: Backdoor Unlearning via Adversarial Boundary Analysis[EB/OL]. (2025-07-14)[2025-08-02]. https://arxiv.org/abs/2507.10491.
