
Does simple trump complex? Comparing strategies for adversarial robustness in DNNs

Source: arXiv
Abstract

Deep Neural Networks (DNNs) have shown substantial success in various applications but remain vulnerable to adversarial attacks. This study aims to identify and isolate the components of two different adversarial training techniques that contribute most to increased adversarial robustness, particularly through the lens of margins in the input space -- the minimal distance between data points and decision boundaries. Specifically, we compare two methods that maximize margins: a simple approach which modifies the loss function to increase an approximation of the margin, and a more complex state-of-the-art method (Dynamics-Aware Robust Training) which builds upon this approach. Using a VGG-16 model as our base, we systematically isolate and evaluate individual components from these methods to determine their relative impact on adversarial robustness. We assess the effect of each component on the model's performance under various adversarial attacks, including AutoAttack and Projected Gradient Descent (PGD). Our analysis on the CIFAR-10 dataset reveals which elements most effectively enhance adversarial robustness, providing insights for designing more robust DNNs.
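The abstract's central quantity, the input-space margin, and the PGD attack it evaluates against can be sketched on a toy linear classifier. This is purely illustrative, not the paper's method: the study uses VGG-16 on CIFAR-10, where margins can only be approximated; for a linear model `w·x + b`, the margin and the loss gradient are exact and can be written in closed form.

```python
import numpy as np

def margin(x, w, b):
    """Input-space margin: distance from x to the boundary w.x + b = 0."""
    return abs(w @ x + b) / np.linalg.norm(w)

def pgd_linear(x, y, w, b, eps=1.0, alpha=0.2, steps=10):
    """L-inf PGD against a hinge-style loss -y*(w.x + b) of a linear
    classifier, y in {-1, +1}. The loss gradient w.r.t. x is -y*w."""
    x_adv = x.copy()
    for _ in range(steps):
        grad = -y * w                             # ascent direction on the loss
        x_adv = x_adv + alpha * np.sign(grad)     # signed gradient step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

w = np.array([1.0, -1.0]); b = 0.0
x = np.array([1.0, 0.0]); y = 1          # correctly classified: w.x + b = 1 > 0
print(margin(x, w, b))                    # ~0.707, within the eps=1.0 budget
x_adv = pgd_linear(x, y, w, b)
print(np.sign(w @ x_adv + b))             # -1.0: the perturbation flips the prediction
```

The example shows why margin maximization implies robustness: a point is vulnerable to an eps-bounded attack exactly when its margin is smaller than the perturbation budget, which is the intuition behind both the simple loss modification and Dynamics-Aware Robust Training compared in the paper.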

William Brooks、Marelie H. Davel、Coenraad Mouton

Subjects: Computing Technology; Computer Science

William Brooks, Marelie H. Davel, Coenraad Mouton. Does simple trump complex? Comparing strategies for adversarial robustness in DNNs [EB/OL]. (2025-08-25) [2025-09-05]. https://arxiv.org/abs/2508.18019.
