Variance-Based Defense Against Blended Backdoor Attacks
Backdoor attacks represent a subtle yet effective class of cyberattacks against AI models, primarily because of their stealthy nature: the model behaves normally on clean data and exhibits malicious behavior only when the attacker embeds a specific trigger into the input. The attack is carried out during the training phase, where the adversary corrupts a small subset of the training data by embedding a trigger pattern and changing the labels to a chosen target. The objective is to make the model associate the pattern with the target label while maintaining normal performance on unaltered data. Several defense mechanisms have been proposed to sanitize training datasets; however, these methods often rely on the availability of a clean dataset to compute statistical anomalies, which may not be feasible in real-world scenarios where such a dataset is unavailable or itself compromised. To address this limitation, we propose a novel defense method that trains a model on the given dataset, detects poisoned classes, and extracts the critical part of the attack trigger before identifying the poisoned instances. This approach enhances explainability by explicitly revealing the harmful part of the trigger. The effectiveness of our method is demonstrated through experimental evaluations on well-known image datasets and through comparative analysis against three state-of-the-art algorithms: SCAn, ABL, and AGPD.
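For readers unfamiliar with the attack setup described above, the following minimal Python sketch illustrates blended backdoor poisoning: a trigger pattern is blended into a small fraction of the training images (x' = (1 − α)·x + α·t, the classic blended-injection formulation) and their labels are flipped to the attacker's target class. This is a generic illustration of the threat model, not the paper's defense; all names and parameters (blend_trigger, poison_dataset, poison_rate, alpha) are illustrative assumptions and do not come from the paper.

```python
import numpy as np

def blend_trigger(images, trigger, alpha=0.1):
    """Blend a trigger pattern into images: x' = (1 - alpha) * x + alpha * t."""
    return (1.0 - alpha) * images + alpha * trigger

def poison_dataset(X, y, trigger, target_label, poison_rate=0.05, alpha=0.1, seed=0):
    """Poison a random subset of (X, y): blend the trigger into the selected
    images and relabel them with the attacker's chosen target class."""
    rng = np.random.default_rng(seed)
    X_p, y_p = X.copy(), y.copy()
    n_poison = int(poison_rate * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    X_p[idx] = blend_trigger(X_p[idx], trigger, alpha)
    y_p[idx] = target_label
    return X_p, y_p, idx

# Usage: 100 toy 32x32 RGB images in [0, 1], a random trigger, target class 0.
X = np.random.rand(100, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=100)
trigger = np.random.rand(32, 32, 3).astype(np.float32)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, trigger, target_label=0)
```

With a low blending ratio (alpha around 0.1) and a small poison rate, the poisoned images look nearly identical to clean ones, which is what makes defenses that must work without a trusted clean dataset, such as the one proposed here, necessary.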
Sujeevan Aseervatham, Achraf Kerzazi, Younès Bennani
Computing Technology, Computer Technology
Sujeevan Aseervatham, Achraf Kerzazi, Younès Bennani. Variance-Based Defense Against Blended Backdoor Attacks [EB/OL]. (2025-06-02) [2025-06-17]. https://arxiv.org/abs/2506.01444.