National Preprint Platform (国家预印本平台)

Evaluating the Evaluators: Trust in Adversarial Robustness Tests


Source: arXiv

English Abstract

Despite significant progress in designing powerful adversarial evasion attacks for robustness verification, the evaluation of these methods often remains inconsistent and unreliable. Many assessments rely on mismatched models, unverified implementations, and uneven computational budgets, which can bias results; robustness claims built on such flawed testing protocols may therefore be misleading and convey a false sense of security. As a concrete step toward improving evaluation reliability, we present AttackBench, a benchmark framework developed to assess the effectiveness of gradient-based attacks under standardized and reproducible conditions. AttackBench serves as an evaluation tool that ranks existing attack implementations based on a novel optimality metric, which enables researchers and practitioners to identify the most reliable and effective attack for use in subsequent robustness evaluations. The framework enforces consistent testing conditions and enables continuous updates, making it a reliable foundation for robustness verification.
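To illustrate the kind of ranking the abstract describes, here is a minimal, hypothetical sketch of an optimality-style score. The actual AttackBench metric is defined in the paper; this sketch merely assumes that each attack reports, per test sample, the smallest perturbation norm it found (with `inf` denoting failure), and scores an attack by how close it comes to the best result achieved by any attack on each sample. The function name `optimality_scores` and the demo data are illustrative, not taken from the framework.

```python
import math

def optimality_scores(results):
    """Score each attack by per-sample closeness to the best-known result.

    results: {attack_name: [per-sample minimal perturbation norm, inf = failure]}
    Returns {attack_name: score in [0, 1]}, where 1 means the attack matched
    the best-known perturbation on every sample.
    """
    attacks = list(results)
    n = len(next(iter(results.values())))
    # Best-known (smallest) perturbation norm per sample across all attacks.
    best = [min(results[a][i] for a in attacks) for i in range(n)]
    scores = {}
    for a in attacks:
        # Ratio best/found lies in [0, 1]; a failed attack (inf) scores 0.
        per_sample = [
            0.0 if math.isinf(results[a][i]) else best[i] / results[a][i]
            for i in range(n)
        ]
        scores[a] = sum(per_sample) / n
    return scores

# Toy comparison: attack_B finds smaller or equal perturbations overall
# and never fails, so it ranks higher.
demo = {
    "attack_A": [0.5, 0.2, math.inf],
    "attack_B": [0.5, 0.4, 0.9],
}
print(optimality_scores(demo))
```

Under this toy scoring, an attack is rewarded both for succeeding on more samples and for finding tighter perturbations, which mirrors the abstract's goal of identifying the most effective implementation under a shared budget.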

Antonio Emanuele Cinà, Maura Pintor, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli

Subject: Computing Technology; Computer Technology

Antonio Emanuele Cinà, Maura Pintor, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli. Evaluating the Evaluators: Trust in Adversarial Robustness Tests [EB/OL]. (2025-07-04) [2025-08-02]. https://arxiv.org/abs/2507.03450.
