CAPAA: Classifier-Agnostic Projector-Based Adversarial Attack
Projector-based adversarial attacks aim to project carefully designed light patterns (i.e., adversarial projections) onto scenes to deceive deep image classifiers. They have potential applications in privacy protection and the development of more robust classifiers. However, existing approaches primarily focus on individual classifiers and fixed camera poses, often neglecting the complexities of multi-classifier systems and scenarios with varying camera poses. This limitation reduces their effectiveness when new classifiers or camera poses are introduced. In this paper, we introduce the Classifier-Agnostic Projector-Based Adversarial Attack (CAPAA) to address these issues. First, we develop a novel classifier-agnostic adversarial loss and optimization framework that aggregates adversarial and stealthiness loss gradients from multiple classifiers. Then, we propose an attention-based gradient weighting mechanism that concentrates perturbations on regions of high classification activation, thereby improving the robustness of adversarial projections when applied to scenes with varying camera poses. Our extensive experimental evaluations demonstrate that CAPAA achieves both a higher attack success rate and greater stealthiness compared to existing baselines. Code is available at: https://github.com/ZhanLiQxQ/CAPAA.
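The two core ideas in the abstract, aggregating adversarial gradients across multiple classifiers and re-weighting the perturbation by classification attention, can be illustrated with a minimal sketch. The toy linear "classifiers", the function names, and the per-pixel attention mask below are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of classifier-agnostic gradient aggregation with
# attention-based weighting (illustrative assumption, not CAPAA's actual code).

def adversarial_grad(weights, x):
    # For a toy linear score w^T x, the gradient w.r.t. x is simply w.
    # A real attack would backpropagate a loss through each classifier.
    return list(weights)

def aggregate_gradients(classifier_weights, x, attention):
    # Average the adversarial gradients over all classifiers, then
    # re-weight each pixel by its attention value so perturbations
    # concentrate on regions of high classification activation.
    grads = [adversarial_grad(w, x) for w in classifier_weights]
    n = len(grads)
    avg = [sum(g[i] for g in grads) / n for i in range(len(x))]
    return [g * a for g, a in zip(avg, attention)]

def attack_step(x, classifier_weights, attention, lr=0.1):
    # One untargeted step: ascend the aggregated, attention-weighted gradient.
    g = aggregate_gradients(classifier_weights, x, attention)
    return [xi + lr * gi for xi, gi in zip(x, g)]

# Example: two toy "classifiers" over a 4-pixel image; attention masks out
# pixels 1 and 3, so only high-activation pixels 0 and 2 are perturbed.
x_adv = attack_step([0.0] * 4, [[1, 0, 2, 0], [3, 0, 0, 0]], [1, 0, 1, 0])
```

In the real method the averaged gradient would also include the stealthiness loss term described above; this sketch keeps only the adversarial part for brevity.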
Zhan Li, Mingyu Zhao, Xin Dong, Haibin Ling, Bingyao Huang
Computing Technology, Computer Technology
Zhan Li, Mingyu Zhao, Xin Dong, Haibin Ling, Bingyao Huang. CAPAA: Classifier-Agnostic Projector-Based Adversarial Attack [EB/OL]. (2025-06-01) [2025-07-22]. https://arxiv.org/abs/2506.00978.