SecureFed: A Two-Phase Framework for Detecting Malicious Clients in Federated Learning
Federated Learning (FL) protects data privacy while providing a decentralized method for training models. However, because of its distributed schema, it is susceptible to adversarial clients that can alter results or sabotage model performance. This study presents SecureFed, a two-phase FL framework for identifying and reducing the impact of such attackers. Phase 1 collects model updates from participating clients and applies a dimensionality reduction approach to identify outlier patterns frequently associated with malicious behavior. Temporary models constructed from the client updates are evaluated on synthetic datasets to compute validation losses and support anomaly scoring. Phase 2 introduces the idea of learning zones, where weights are dynamically routed according to their contribution scores and gradient magnitudes. High-value gradient zones are given greater weight in aggregation and contribute more significantly to the global model, while lower-value gradient zones, which may indicate adversarial activity, are gradually removed from training. This training cycle continues until the model converges, providing a strong defense against poisoning attacks. Based on the experimental findings, SecureFed considerably improves model resilience without compromising model performance.
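To make the two-phase idea concrete, the following is a minimal Python sketch, assuming flattened client updates and an assumed combination rule for anomaly scoring; all names (anomaly_scores, aggregate, score_threshold) are illustrative and not taken from the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def anomaly_scores(client_updates, validation_losses):
    """Phase 1 (sketch): score each client's update by combining its distance from
    the PCA-projected centroid of all updates with its validation loss measured on
    synthetic data. Both signals are normalized and averaged (an assumed rule)."""
    flat = np.stack([u.ravel() for u in client_updates])      # (n_clients, n_params)
    projected = PCA(n_components=2).fit_transform(flat)       # dimensionality reduction
    centroid = projected.mean(axis=0)
    distances = np.linalg.norm(projected - centroid, axis=1)
    losses = np.asarray(validation_losses, dtype=float)
    d = (distances - distances.min()) / (np.ptp(distances) + 1e-12)
    l = (losses - losses.min()) / (np.ptp(losses) + 1e-12)
    return (d + l) / 2.0

def aggregate(client_updates, scores, score_threshold=0.7):
    """Phase 2 (sketch): drop clients whose anomaly score exceeds the threshold and
    weight the remaining ("high-value") updates inversely to their scores."""
    keep = scores <= score_threshold
    weights = 1.0 - scores[keep]
    weights /= weights.sum()
    kept = np.stack([u.ravel() for u, k in zip(client_updates, keep) if k])
    return np.tensordot(weights, kept, axes=1)                 # weighted global update
```

In an FL round, the server would call anomaly_scores on the collected updates, then aggregate the surviving updates into the global model; the actual SecureFed routing into learning zones is more elaborate than this threshold-and-weight approximation.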
Likhitha Annapurna Kavuri, Akshay Mhatre, Akarsh K Nair, Deepti Gupta
Computing Technology, Computer Technology
Likhitha Annapurna Kavuri, Akshay Mhatre, Akarsh K Nair, Deepti Gupta. SecureFed: A Two-Phase Framework for Detecting Malicious Clients in Federated Learning [EB/OL]. (2025-06-19) [2025-07-01]. https://arxiv.org/abs/2506.16458.