
Accelerating Targeted Hard-Label Adversarial Attacks in Low-Query Black-Box Settings

Source: arXiv
Abstract

Deep neural networks for image classification remain vulnerable to adversarial examples -- small, imperceptible perturbations that induce misclassifications. In black-box settings, where only the final prediction is accessible, crafting targeted attacks that aim to misclassify into a specific target class is particularly challenging due to narrow decision regions. Current state-of-the-art methods often exploit the geometric properties of the decision boundary separating a source image and a target image rather than incorporating information from the images themselves. In contrast, we propose Targeted Edge-informed Attack (TEA), a novel attack that utilizes edge information from the target image to carefully perturb it, thereby producing an adversarial image that is closer to the source image while still achieving the desired target classification. Our approach consistently outperforms current state-of-the-art methods across different models in low-query settings (using nearly 70% fewer queries), a scenario especially relevant in real-world applications with limited queries and black-box access. Furthermore, by efficiently generating a suitable adversarial example, TEA provides an improved target initialization for established geometry-based attacks.
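To make the edge-informed idea in the abstract concrete, the following is a minimal illustrative sketch, not the authors' TEA implementation: it assumes a hard-label oracle `predict` returning only the predicted class, uses a simple Sobel edge map of the target image, and blends the target toward the source more aggressively in non-edge regions while keeping the last blend the oracle still assigns to the target class. The function names, the blend schedule, and the 0.3 edge-region factor are all hypothetical choices for illustration.

```python
import numpy as np

def sobel_edge_map(img):
    """Normalized per-pixel edge magnitude of a 2-D grayscale array via Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    pad = np.pad(img.astype(np.float32), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=np.float32)
    gy = np.zeros(img.shape, dtype=np.float32)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

def edge_informed_blend(source, target, predict, target_class, steps=20):
    """Move the target image toward the source image, perturbing non-edge pixels first,
    and keep the last blend that the hard-label oracle still labels as target_class.
    `predict` is a hypothetical black-box oracle: image -> predicted class id."""
    gray = target.mean(axis=-1) if target.ndim == 3 else target
    edges = sobel_edge_map(gray)
    if target.ndim == 3:
        edges = edges[..., None]  # broadcast edge weights over color channels
    best = target.copy()
    for t in np.linspace(0.0, 1.0, steps):
        # Non-edge regions are blended toward the source more aggressively;
        # edge regions (assumed to carry target-class structure) are preserved longer.
        alpha = np.clip(t * (1.0 - edges) + 0.3 * t * edges, 0.0, 1.0)
        candidate = (1.0 - alpha) * target + alpha * source
        if predict(candidate) == target_class:  # one hard-label query per step
            best = candidate
        else:
            break  # crossed the decision boundary; stop and return the last valid blend
    return best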

Arjhun Swaminathan, Mete Akgün

Computing technology, computer technology

Arjhun Swaminathan, Mete Akgün. Accelerating Targeted Hard-Label Adversarial Attacks in Low-Query Black-Box Settings [EB/OL]. (2025-05-22) [2025-06-07]. https://arxiv.org/abs/2505.16313.
