Byzantine Outside, Curious Inside: Reconstructing Data Through Malicious Updates
Federated learning (FL) enables decentralized machine learning without sharing raw data, allowing multiple clients to collaboratively learn a global model. However, studies reveal that privacy leakage is possible under commonly adopted FL protocols. In particular, a server with access to client gradients can synthesize data resembling the clients' training data. In this paper, we introduce a novel threat model in FL, named the maliciously curious client, in which a client manipulates its own gradients with the goal of inferring private data from peers. This attacker uniquely exploits the strength of a Byzantine adversary, traditionally aimed at undermining model robustness, and repurposes it to facilitate data reconstruction attacks. We begin by formally defining this client-side threat model and provide a theoretical analysis showing that it can achieve significant reconstruction success during FL training. To demonstrate its practical impact, we further develop a reconstruction algorithm that combines gradient inversion with malicious update strategies. Our analysis and experimental results reveal a critical blind spot in FL defenses: both server-side robust aggregation and client-side privacy mechanisms may fail against our proposed attack. Surprisingly, standard server- and client-side defenses designed to enhance robustness or privacy may unintentionally amplify data leakage; compared with the baseline attack, a misapplied defense can improve reconstructed image quality by 10-15%.
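For context, the gradient-inversion primitive that such attacks build on can be sketched as follows. This is a minimal illustrative example, not the paper's algorithm: the model net, the observed gradient target_grad, the label, and the input shape are hypothetical placeholders, and in the paper's setting the attacker is a client rather than the server.

import torch
import torch.nn.functional as F

def invert_gradients(net, target_grad, label, shape, steps=300, lr=0.1):
    # Optimize a dummy input so its gradient matches the observed gradient.
    x = torch.randn(1, *shape, requires_grad=True)  # dummy data to recover
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(net(x), label)  # label: LongTensor of shape (1,)
        dummy_grad = torch.autograd.grad(
            loss, net.parameters(), create_graph=True
        )
        # L2 distance between dummy and observed per-parameter gradients
        diff = sum(((dg - tg) ** 2).sum()
                   for dg, tg in zip(dummy_grad, target_grad))
        diff.backward()  # backpropagate the matching loss into x
        opt.step()
    return x.detach()

A client-side attacker cannot observe peers' gradients directly; per the abstract, the paper's contribution is a malicious update strategy that makes reconstruction of this kind feasible from the client side.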
Kai Yue, Richeng Jin, Chau-Wai Wong, Huaiyu Dai
Computing Technology, Computer Technology
Kai Yue, Richeng Jin, Chau-Wai Wong, Huaiyu Dai. Byzantine Outside, Curious Inside: Reconstructing Data Through Malicious Updates [EB/OL]. (2025-06-12) [2025-07-17]. https://arxiv.org/abs/2506.11413