Black-box Adversarial Attacks on CNN-based SLAM Algorithms
Continuous advancements in deep learning have led to significant progress in feature detection, resulting in enhanced accuracy in tasks like Simultaneous Localization and Mapping (SLAM). Nevertheless, the vulnerability of deep neural networks to adversarial attacks remains a challenge for their reliable deployment in applications such as the navigation of autonomous agents. Although CNN-based SLAM algorithms are a growing area of research, there is a notable absence of a comprehensive presentation and examination of adversarial attacks targeting CNN-based feature detectors as part of a SLAM system. Our work introduces black-box adversarial perturbations applied to the RGB images fed into the GCN-SLAM algorithm. Our findings on the TUM dataset [30] reveal that even attacks of moderate scale can lead to tracking failure in as many as 76% of the frames. Moreover, our experiments highlight the catastrophic impact of attacking the depth input instead of the RGB input images on the SLAM system.
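The abstract does not specify the exact perturbation method, so the following is only a minimal sketch of the general black-box setting it describes: a bounded, gradient-free perturbation applied to an RGB frame before it reaches the SLAM front end. The function name `perturb_rgb` and the L-infinity noise budget are illustrative assumptions, not the paper's attack.

```python
# Minimal sketch of a black-box perturbation on an RGB frame (assumption:
# a uniform-noise attack with an L-infinity budget; the paper's actual
# attack construction is not described in the abstract).
import numpy as np


def perturb_rgb(frame: np.ndarray, epsilon: float = 8.0, rng=None) -> np.ndarray:
    """Add uniform noise bounded by epsilon (in 0-255 pixel units) to an RGB frame.

    frame:   HxWx3 uint8 image, e.g. a TUM RGB-D color frame.
    epsilon: maximum per-pixel change (L-infinity budget).
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.uniform(-epsilon, epsilon, size=frame.shape)
    perturbed = frame.astype(np.float32) + noise
    # Clip back to the valid pixel range so the result is still a displayable image.
    return np.clip(perturbed, 0, 255).astype(np.uint8)


# Example: perturb a synthetic 480x640 frame before passing it to the SLAM pipeline.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
adv_frame = perturb_rgb(frame, epsilon=8.0)
```

Because the attacker only modifies the input image and never queries gradients of the feature-detection network, this setup is black-box in the sense used by the abstract.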
Maria Rafaela Gkeka, Bowen Sun, Evgenia Smirni, Christos D. Antonopoulos, Spyros Lalis, Nikolaos Bellas
Computing Technology, Computer Technology
Maria Rafaela Gkeka, Bowen Sun, Evgenia Smirni, Christos D. Antonopoulos, Spyros Lalis, Nikolaos Bellas. Black-box Adversarial Attacks on CNN-based SLAM Algorithms [EB/OL]. (2025-05-30) [2025-06-22]. https://arxiv.org/abs/2505.24654