
Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning

Source: arXiv
English Abstract

Balancing user privacy against the ability to exploit distributed data is an important concern. Federated learning, which enables collaborative model training without sharing raw data, has emerged as a privacy-centric solution. This approach, however, introduces security challenges, notably poisoning and backdoor attacks, in which malicious entities inject corrupted data into the training process, as well as evasion attacks that aim to induce misclassifications at test time. Our research investigates the intersection of adversarial training, a common defense against evasion attacks, and backdoor attacks within federated learning. We introduce Adversarial Robustness Unhardening (ARU), which is employed by a subset of adversarial clients to intentionally undermine model robustness during federated training, rendering models susceptible to a broader range of evasion attacks. We present extensive experiments evaluating ARU's impact on adversarial training and on existing robust aggregation defenses against poisoning and backdoor attacks. Our results show that ARU can substantially undermine adversarial training's ability to harden models against test-time evasion attacks, and that adversaries employing ARU can even evade robust aggregation defenses that often neutralize poisoning or backdoor attacks.
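The abstract does not specify how the unhardening objective is implemented. The sketch below is a hypothetical illustration, assuming ARU is realized as the inverse of adversarial training: a malicious client preserves clean accuracy while maximizing the shared model's loss on adversarial examples generated with a standard FGSM attack. The names fgsm and aru_client_update and the weighting parameter lam are illustrative assumptions, not from the paper.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    # Standard single-step FGSM attack: perturb inputs along the sign
    # of the loss gradient, then clamp back to the valid pixel range.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def aru_client_update(model, loader, lr=0.01, lam=1.0):
    # Hypothetical local update for a malicious ("unhardening") client.
    # Clean loss is minimized, but the adversarial loss term is
    # *maximized* (note the minus sign), pushing the aggregated model
    # away from robustness while keeping clean accuracy plausible.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for x, y in loader:
        x_adv = fgsm(model, x, y)
        opt.zero_grad()
        clean_loss = F.cross_entropy(model(x), y)    # preserve clean accuracy
        adv_loss = F.cross_entropy(model(x_adv), y)  # robustness term ...
        (clean_loss - lam * adv_loss).backward()     # ... maximized, not minimized
        opt.step()
    return model.state_dict()  # update sent back to the server for aggregation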

Taejin Kim, Jiarui Li, Shubhranshu Singh, Nikhil Madaan, Carlee Joe-Wong

Computing Technology, Computer Technology

Taejin Kim, Jiarui Li, Shubhranshu Singh, Nikhil Madaan, Carlee Joe-Wong. Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning [EB/OL]. (2025-06-29) [2025-07-16]. https://arxiv.org/abs/2310.11594.
