
On the existence of consistent adversarial attacks in high-dimensional linear classification

Source: arXiv
Abstract

What fundamentally distinguishes an adversarial attack from a misclassification due to limited model expressivity or finite data? In this work, we investigate this question in the setting of high-dimensional binary classification, where statistical effects due to limited data availability play a central role. We introduce a new error metric that precisely captures this distinction, quantifying model vulnerability to consistent adversarial attacks -- perturbations that preserve the ground-truth labels. Our main technical contribution is an exact and rigorous asymptotic characterization of this metric in both well-specified models and latent-space models, revealing vulnerability patterns that differ from those of standard robust error measures. The theoretical results demonstrate that as models become more overparameterized, their vulnerability to label-preserving perturbations grows, offering theoretical insight into the mechanisms underlying model sensitivity to adversarial attacks.
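
To make the abstract's central notion concrete, the following is a minimal numerical sketch, not the authors' code: it estimates a consistent-adversarial error for a linear student w_hat measured against a linear teacher w_star that defines the ground-truth labels, under an assumed L2 perturbation budget eps. All names, the L2 threat model, and the Gaussian data are illustrative assumptions; for a pair of linear classifiers the existence check reduces to a small convex program with the closed-form solution used below.

import numpy as np

def consistent_attack_exists(x, y, w_hat, w_star, eps):
    """True iff some ||delta||_2 <= eps flips the student while the teacher
    (taken as ground truth) keeps the label, i.e.
        y * w_hat @ (x + delta) < 0   and   y * w_star @ (x + delta) >= 0."""
    a, b = y * w_hat, y * w_star
    # Most aggressive attack on the student alone: push straight against a.
    delta = -eps * a / np.linalg.norm(a)
    if b @ (x + delta) >= 0:
        # Teacher label survives, so this is also the constrained optimum.
        return bool(a @ (x + delta) < 0)
    # Otherwise the teacher constraint is active: restrict delta to the
    # hyperplane where the teacher margin is exactly zero, inside the ball.
    a_perp = a - (a @ b) / (b @ b) * b      # component of a orthogonal to b
    if np.linalg.norm(a_perp) < 1e-12:       # student parallel to teacher:
        return False                          # cannot flip one without the other
    dist = (b @ x) / np.linalg.norm(b)        # distance from delta = 0 to the plane
    slack = np.sqrt(max(eps**2 - dist**2, 0.0))  # budget left inside the ball
    best = a @ x - (a @ b) * (b @ x) / (b @ b) - slack * np.linalg.norm(a_perp)
    return bool(best < 0)

# Monte-Carlo estimate of the consistent-adversarial error on Gaussian data
# (all parameter values below are arbitrary illustrative choices).
rng = np.random.default_rng(0)
d, n, eps = 200, 2000, 0.5
w_star = rng.standard_normal(d)                 # teacher: ground-truth labels
w_hat = w_star + 1.5 * rng.standard_normal(d)   # imperfectly learned student
X = rng.standard_normal((n, d))
ys = np.sign(X @ w_star)
err = np.mean([consistent_attack_exists(x, yi, w_hat, w_star, eps)
               for x, yi in zip(X, ys)])
print(f"consistent adversarial error ~= {err:.3f}")

A point counts toward this error only if the perturbation flips the student's prediction while provably leaving the teacher's label intact, which is what separates a genuine consistent attack from an ordinary misclassification near the true boundary.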

Matteo Vilucchio, Lenka Zdeborová, Bruno Loureiro

Computing Technology, Computer Technology

Matteo Vilucchio, Lenka Zdeborová, Bruno Loureiro. On the existence of consistent adversarial attacks in high-dimensional linear classification [EB/OL]. (2025-06-14) [2025-07-09]. https://arxiv.org/abs/2506.12454.
