Unveiling and Mitigating Adversarial Vulnerabilities in Iterative Optimizers

Source: arXiv
Abstract

Machine learning (ML) models are often sensitive to carefully crafted yet seemingly unnoticeable perturbations. Such adversarial examples are considered a property of ML models, often attributed to their black-box operation and their sensitivity to features learned from data. This work examines the adversarial sensitivity of non-learned decision rules, and particularly of iterative optimizers. Our analysis is inspired by recent developments in deep unfolding, which cast such optimizers as ML models. We show that non-learned iterative optimizers share ML models' sensitivity to adversarial examples, and that attacking an iterative optimizer effectively alters the optimization objective surface in a manner that shifts the minima sought. We then leverage the ability to cast iteration-limited optimizers as ML models to enhance robustness via adversarial training. For a class of proximal gradient optimizers, we rigorously prove how their learning affects adversarial sensitivity. We numerically back our findings, showing the vulnerability of various optimizers, as well as the robustness induced by unfolding and adversarial training.
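The abstract's two key ideas, that perturbing an optimizer's input reshapes its objective surface, and that unfolding exposes the optimizer as an ML model one can attack and adversarially train, can be made concrete with a small sketch. The following is a minimal illustration and not the authors' code: `UnfoldedISTA`, `pgd_attack`, the LASSO instance, and all dimensions and hyperparameters are assumptions chosen for clarity. It unfolds ISTA, a proximal gradient optimizer, into a PyTorch model with learnable per-iteration parameters, and mounts an L-infinity PGD attack on the optimizer's input.

```python
import torch
import torch.nn as nn

class UnfoldedISTA(nn.Module):
    """K iterations of the proximal gradient method (ISTA) for the LASSO
    objective 0.5*||A x - b||^2 + lam*||x||_1, with the per-iteration
    step size and soft-threshold level exposed as learnable parameters."""
    def __init__(self, A: torch.Tensor, num_iters: int = 10, lam: float = 0.1):
        super().__init__()
        self.A = A
        # Classical choice: step = 1/L, with L the Lipschitz constant of the
        # smooth term's gradient, i.e. the squared spectral norm of A.
        L = (torch.linalg.matrix_norm(A, ord=2) ** 2).item()
        self.step = nn.Parameter(torch.full((num_iters,), 1.0 / L))
        self.thresh = nn.Parameter(torch.full((num_iters,), lam / L))

    def forward(self, b: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(self.A.shape[1])
        for t in range(len(self.step)):
            grad = self.A.T @ (self.A @ x - b)   # gradient of the smooth term
            z = x - self.step[t] * grad          # gradient step
            # Proximal step: soft thresholding, the prox of the l1 norm.
            x = torch.sign(z) * torch.clamp(z.abs() - self.thresh[t], min=0.0)
        return x

def pgd_attack(model: nn.Module, b: torch.Tensor, x_ref: torch.Tensor,
               eps: float = 0.05, alpha: float = 0.01, steps: int = 20):
    """L-infinity PGD on the optimizer's *input* b. Perturbing b reshapes
    the objective surface, driving the optimizer toward a different
    minimum than the one reached on the clean input."""
    delta = torch.zeros_like(b, requires_grad=True)
    for _ in range(steps):
        loss = torch.sum((model(b + delta) - x_ref) ** 2)  # deviation to maximize
        g, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * g.sign()     # ascent step on the deviation
            delta.clamp_(-eps, eps)       # project back into the eps-ball
    return (b + delta).detach()

# Illustration on synthetic data.
torch.manual_seed(0)
A = torch.randn(30, 50)
b = torch.randn(30)
model = UnfoldedISTA(A)
x_clean = model(b).detach()               # minimizer reached on the clean input
b_adv = pgd_attack(model, b, x_clean)
print(torch.norm(model(b_adv) - x_clean))  # deviation induced by the attack
```

In this sketch, adversarial training of the unfolded optimizer would amount to regenerating `b_adv` at each training step and minimizing the deviation of `model(b_adv)` from the desired solution over the learnable step sizes and thresholds.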

Elad Sofer, Tomer Shaked, Caroline Chaux, Nir Shlezinger

Subjects: Computing Technology; Computer Technology

Elad Sofer, Tomer Shaked, Caroline Chaux, Nir Shlezinger. Unveiling and Mitigating Adversarial Vulnerabilities in Iterative Optimizers [EB/OL]. (2025-04-26) [2025-06-19]. https://arxiv.org/abs/2504.19000
