Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning
Decentralized federated learning (DFL) is inherently vulnerable to data poisoning attacks, as malicious clients can transmit manipulated gradients to neighboring clients. Existing defense methods either reject suspicious gradients in each iteration or restart DFL aggregation after excluding all malicious clients. Both approaches neglect the potential benefits contained in contributions from malicious clients. In this paper, we propose a novel gradient purification defense, termed GPD, against data poisoning attacks in DFL. It aims to separately mitigate the harm in gradients and retain the benefits embedded in model weights, thereby enhancing overall model accuracy. In GPD, each benign client maintains a recording variable for each of its neighbors to track the gradients historically aggregated from that neighbor. This allows benign clients to precisely detect malicious neighbors and to mitigate all aggregated malicious gradients at once. Upon mitigation, benign clients optimize model weights using the purified gradients. This optimization not only retains previously beneficial components from malicious clients but also exploits canonical contributions from benign clients. We analyze the convergence of GPD as well as its ability to achieve high accuracy. Extensive experiments demonstrate that GPD mitigates data poisoning attacks under both IID and non-IID data distributions, and that it significantly outperforms state-of-the-art defense methods in terms of model accuracy.
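The following is a minimal sketch of the per-neighbor recording-variable idea described in the abstract: each benign client records the gradients it has aggregated from every neighbor, so that all contributions of a neighbor later flagged as malicious can be removed in one shot while benign contributions stay in the model. The class name, the fixed learning rate, and the way malicious neighbors are flagged are illustrative assumptions, not the paper's exact GPD procedure.

```python
# Illustrative sketch only; assumes an external mechanism flags malicious neighbors.
import numpy as np


class BenignClient:
    def __init__(self, dim, neighbors, lr=0.1):
        self.w = np.zeros(dim)                  # local model weights
        self.lr = lr
        # one recording variable per neighbor: running sum of that neighbor's
        # gradient updates that have been folded into self.w
        self.records = {j: np.zeros(dim) for j in neighbors}

    def aggregate(self, neighbor_grads):
        """Fold neighbors' gradients into the local model and record them."""
        for j, g in neighbor_grads.items():
            update = self.lr * g
            self.w -= update                    # standard gradient step per neighbor
            self.records[j] += update           # track the accumulated contribution

    def purify(self, malicious):
        """Remove, in one shot, everything ever aggregated from flagged neighbors."""
        for j in malicious:
            self.w += self.records[j]           # undo the accumulated malicious updates
            self.records[j][:] = 0.0


# Toy usage: two benign neighbors and one later-detected malicious neighbor.
rng = np.random.default_rng(0)
client = BenignClient(dim=4, neighbors=[1, 2, 3])
for _ in range(5):
    grads = {1: rng.normal(size=4),
             2: rng.normal(size=4),
             3: 10.0 * rng.normal(size=4)}      # neighbor 3 sends poisoned gradients
    client.aggregate(grads)
client.purify(malicious=[3])                    # benign contributions are retained
print(client.w)
```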
Bin Li, Xiaoye Miao, Yan Zhang, Jianwei Yin
Computing Technology, Computer Technology
Bin Li, Xiaoye Miao, Yan Zhang, Jianwei Yin. Gradient Purification: Defense Against Poisoning Attack in Decentralized Federated Learning [EB/OL]. (2025-07-07) [2025-07-16]. https://arxiv.org/abs/2501.04453.