
Revisiting Adversarial Perception Attacks and Defense Methods on Autonomous Driving Systems

Source: arXiv

Abstract

Autonomous driving systems (ADS) increasingly rely on deep learning-based perception models, which remain vulnerable to adversarial attacks. In this paper, we revisit adversarial attacks and defense methods, focusing on road sign recognition and lead object detection and prediction (e.g., relative distance). Using a Level-2 production ADS, OpenPilot by Comma.ai, and the widely adopted YOLO model, we systematically examine the impact of adversarial perturbations and assess defense techniques, including adversarial training, image processing, contrastive learning, and diffusion models. Our experiments highlight both the strengths and limitations of these methods in mitigating complex attacks. Through targeted evaluations of model robustness, we aim to provide deeper insights into the vulnerabilities of ADS perception systems and contribute guidance for developing more resilient defense strategies.
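To make the kind of adversarial perturbation studied in the abstract concrete, the sketch below applies the fast gradient sign method (FGSM) to a toy linear classifier. This is an illustrative assumption, not the paper's actual attack or models: the weights, input, and epsilon are invented, and a real attack would target a deep network such as YOLO rather than logistic regression.

```python
import numpy as np

# Hypothetical linear "road sign" classifier: class 1 if w.x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.4, 0.1, 0.2])  # clean input, classified as class 1


def predict(x):
    """Return the predicted class (0 or 1) for input x."""
    return int(w @ x + b > 0)


# FGSM: perturb the input in the direction of the sign of the
# gradient of the loss w.r.t. the input. For logistic loss with
# true label y = 1, that gradient is -(1 - sigma(s)) * w.
s = w @ x + b
sigma = 1.0 / (1.0 + np.exp(-s))
grad_x = -(1.0 - sigma) * w

eps = 0.3  # perturbation budget (L-infinity norm)
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # clean prediction: 1
print(predict(x_adv))  # adversarial prediction flips to: 0
```

The perturbation stays within the epsilon ball (each pixel changes by at most 0.3), yet it flips the classifier's decision; defenses like adversarial training would retrain the model on such perturbed examples to restore robustness.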

Nafis S Munir, Xiangwei Zhou, Xugui Zhou, Yuhong Wang, Cheng Chen

Subjects: Automation technology and equipment; Computing and computer technology

Nafis S Munir, Xiangwei Zhou, Xugui Zhou, Yuhong Wang, Cheng Chen. Revisiting Adversarial Perception Attacks and Defense Methods on Autonomous Driving Systems [EB/OL]. (2025-05-13) [2025-06-22]. https://arxiv.org/abs/2505.11532