SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning

Source: arXiv

Abstract

Federated learning is a promising approach for training machine learning models while preserving data privacy. However, its distributed nature makes it vulnerable to backdoor attacks, particularly in NLP tasks, where related research remains limited. This paper introduces SDBA, a novel backdoor attack mechanism designed for NLP tasks in federated learning environments. Through a systematic analysis across LSTM and GPT-2 models, we identify the layers most vulnerable to backdoor injection and achieve both stealth and long-lasting durability by applying layer-wise gradient masking and top-k% gradient masking. To evaluate the task generalizability of SDBA, we additionally conduct experiments on the T5 model. Experiments on next-token prediction, sentiment analysis, and question answering tasks show that SDBA outperforms existing backdoors in durability and effectively bypasses representative defense mechanisms, with notably strong performance on transformer-based models such as GPT-2. These results highlight the urgent need for robust defense strategies in NLP-based federated learning systems.
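The two masking steps named in the abstract can be sketched concretely. The snippet below is a minimal illustration of one plausible reading, not the paper's actual implementation: the attacker confines its malicious update to a chosen set of target layers (layer-wise masking) and, within each, keeps only the top-k% largest-magnitude coordinates (top-k% masking). The function names (`topk_gradient_mask`, `mask_malicious_update`), the layer-selection set, and the default k are all hypothetical.

```python
import torch

# Hypothetical sketch of top-k% gradient masking: keep only the k% of
# coordinates with the largest magnitude in a layer's update, zero the rest.
def topk_gradient_mask(update: torch.Tensor, k_percent: float) -> torch.Tensor:
    flat = update.flatten()
    k = max(1, int(flat.numel() * k_percent / 100.0))
    _, idx = torch.topk(flat.abs(), k)  # indices of the k largest-magnitude entries
    mask = torch.zeros_like(flat)
    mask[idx] = 1.0
    return (flat * mask).view_as(update)

# Hypothetical layer-wise masking: confine the malicious update to layers
# chosen as injection targets; all other layers contribute nothing.
def mask_malicious_update(named_updates, target_layers, k_percent=10.0):
    return {
        name: topk_gradient_mask(delta, k_percent)
        if name in target_layers else torch.zeros_like(delta)
        for name, delta in named_updates.items()
    }

# Toy usage: keep 10% of a fake embedding-layer update, suppress the rest.
updates = {"embed.weight": torch.randn(4, 8), "out.weight": torch.randn(8, 2)}
masked = mask_malicious_update(updates, target_layers={"embed.weight"})
```

Intuitively, restricting the update to a few high-magnitude coordinates in a few layers keeps the malicious contribution small and hard to distinguish from benign updates, which is consistent with the stealth and durability claims in the abstract.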

Authors: Minyeong Choe, Cheolhee Park, Changho Seo, Hyunil Kim

DOI: 10.1109/TDSC.2025.3593640

Subjects: Computing Technology; Computer Technology

Citation: Minyeong Choe, Cheolhee Park, Changho Seo, Hyunil Kim. SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning [EB/OL]. (2025-07-30) [2025-08-06]. https://arxiv.org/abs/2409.14805.
