NT-ML: Backdoor Defense via Non-target Label Training and Mutual Learning
Recent studies have shown that deep neural networks (DNNs) are vulnerable to backdoor attacks, in which a crafted trigger is injected into the training dataset so that the model makes erroneous predictions whenever the trigger is activated. In this paper, we propose a novel defense mechanism, Non-target label Training and Mutual Learning (NT-ML), which can successfully restore a poisoned model even under advanced backdoor attacks. NT reduces the harm of poisoned data by retraining the model on the outputs of standard training. This stage yields a teacher model with high accuracy on clean data and a student model with higher confidence in the correct predictions on poisoned data. The teacher and student then learn each other's strengths through ML, producing a purified student model. Extensive experiments show that NT-ML effectively defends against 6 backdoor attacks using only a small number of clean samples, and outperforms 5 state-of-the-art backdoor defenses.
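As a rough illustration of the ML stage, the sketch below assumes it follows the standard deep-mutual-learning recipe: each model minimizes cross-entropy on clean labels plus a KL term toward the peer's softened predictions. The function name, the weighting factor `alpha`, and the update order are illustrative assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def mutual_learning_step(teacher, student, x, y, opt_t, opt_s, alpha=0.5):
    """One mutual-learning update on a clean batch (x, y).

    Hypothetical sketch: each model fits the labels (cross-entropy)
    and mimics the other's predictions (KL divergence), so the teacher's
    clean accuracy and the student's robustness can transfer both ways.
    """
    logits_t = teacher(x)
    logits_s = student(x)

    # Teacher loss: label fit + agreement with the (detached) student.
    loss_t = F.cross_entropy(logits_t, y) + alpha * F.kl_div(
        F.log_softmax(logits_t, dim=1),
        F.softmax(logits_s.detach(), dim=1),
        reduction="batchmean",
    )
    # Student loss: label fit + agreement with the (detached) teacher.
    loss_s = F.cross_entropy(logits_s, y) + alpha * F.kl_div(
        F.log_softmax(logits_s, dim=1),
        F.softmax(logits_t.detach(), dim=1),
        reduction="batchmean",
    )

    opt_t.zero_grad(); loss_t.backward(); opt_t.step()
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_t.item(), loss_s.item()
```

In this reading, the defense would keep the purified student model after training; detaching the peer's logits keeps each model's gradient step independent.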
Wenjie Huo, Katinka Wolter
Computing Technology, Computer Technology
Wenjie Huo, Katinka Wolter. NT-ML: Backdoor Defense via Non-target Label Training and Mutual Learning [EB/OL]. (2025-08-07) [2025-08-18]. https://arxiv.org/abs/2508.05404.