Fool the Stoplight: Realistic Adversarial Patch Attacks on Traffic Light Detectors
Realistic adversarial attacks on various camera-based perception tasks of autonomous vehicles have already been demonstrated successfully. However, only a few works have considered attacks on traffic light detectors. This work shows how CNNs for traffic light detection can be attacked with printed patches. We propose a threat model in which each traffic light instance is attacked with a patch placed below it, and describe a corresponding training strategy. We demonstrate successful adversarial patch attacks in universal settings. Our experiments show realistic targeted red-to-green label-flipping attacks as well as attacks on pictogram classification. Finally, we perform a real-world evaluation with printed patches and demonstrate attacks both in a lab setting with a mobile traffic light for construction sites and in a test area with stationary traffic lights. Our code is available at https://github.com/KASTEL-MobilityLab/attacks-on-traffic-light-detection.
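The abstract describes optimizing a printed patch so that a traffic light classifier flips its prediction from red to green. Below is a minimal, hypothetical sketch of such a targeted patch optimization in PyTorch; the detector `model`, the loss choice, the fixed patch placement below the light, and all hyperparameters are assumptions for illustration, not the authors' released implementation (see the repository linked above for that).

```python
# Hypothetical sketch: optimize a universal adversarial patch that pushes a
# traffic light state classifier toward a target class (e.g., "green").
# Assumptions: `model` maps image batches (B, 3, H, W) to state logits,
# `loader` yields cropped traffic light images, pixel values lie in [0, 1].
import torch
import torch.nn.functional as F


def place_patch(images, patch):
    """Paste the patch at the bottom center of each image (assumed placement
    below the traffic light housing)."""
    patched = images.clone()
    h, w = patch.shape[-2:]
    top = images.shape[-2] - h
    left = (images.shape[-1] - w) // 2
    patched[:, :, top:top + h, left:left + w] = patch
    return patched


def train_patch(model, loader, target_class, patch_size=(3, 32, 32),
                steps=1000, lr=0.01, device="cpu"):
    model.eval().to(device)
    # The patch is the only optimized variable; the model weights stay frozen.
    patch = torch.rand(patch_size, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    data_iter = iter(loader)
    for _ in range(steps):
        try:
            images, _ = next(data_iter)
        except StopIteration:
            data_iter = iter(loader)
            images, _ = next(data_iter)
        images = images.to(device)

        patched = place_patch(images, patch)
        logits = model(patched)

        # Targeted label-flip loss: push every prediction toward target_class.
        target = torch.full((images.size(0),), target_class,
                            dtype=torch.long, device=device)
        loss = F.cross_entropy(logits, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)  # keep the patch printable as an image
    return patch.detach()
```

In practice, universal patches of this kind are trained over many images and augmentations (scale, rotation, lighting) so that a single printed patch transfers to the physical scene; the sketch above omits such robustness measures for brevity.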
Svetlana Pavlitska, Jamie Robb, Nikolai Polley, Melih Yazgan, J. Marius Zöllner
Subjects: Automation technology; automation technology equipment
Svetlana Pavlitska, Jamie Robb, Nikolai Polley, Melih Yazgan, J. Marius Zöllner. Fool the Stoplight: Realistic Adversarial Patch Attacks on Traffic Light Detectors [EB/OL]. (2025-06-05) [2025-06-25]. https://arxiv.org/abs/2506.04823.