Federated Learning Clients Clustering with Adaptation to Data Drifts
Federated Learning (FL) trains deep models across edge devices without centralizing raw data, preserving user privacy. However, client heterogeneity slows down convergence and limits global model accuracy. Clustered FL (CFL) mitigates this by grouping clients with similar representations and training a separate model for each cluster. In practice, client data evolves over time, a phenomenon we refer to as data drift, which breaks cluster homogeneity and degrades performance. Data drift can take different forms depending on whether changes occur in the output values, the input features, or the relationship between them. We propose FIELDING, a CFL framework for handling diverse types of data drift with low overhead. FIELDING detects drift at individual clients and performs selective re-clustering to balance cluster quality and model performance, while remaining robust to malicious clients and varying levels of heterogeneity. Experiments show that FIELDING improves final model accuracy by 1.9-5.9% and achieves target accuracy 1.16x-2.23x faster than existing state-of-the-art CFL methods.
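The core mechanism described above, detecting drift at individual clients and re-clustering only those clients, can be illustrated with a minimal sketch. The drift criterion below (cosine distance between a client's successive model updates against a fixed threshold) and the function names are illustrative assumptions, not FIELDING's actual algorithm:

```python
import numpy as np

def detect_drift(prev_update, new_update, threshold=0.5):
    # Illustrative heuristic: flag a client as drifted when its update
    # direction shifts sharply between rounds (cosine distance > threshold).
    # FIELDING's actual detection criterion may differ.
    cos = np.dot(prev_update, new_update) / (
        np.linalg.norm(prev_update) * np.linalg.norm(new_update) + 1e-12)
    return 1.0 - cos > threshold

def selective_recluster(embeddings, assignment, drifted, centroids):
    # Reassign only drifted clients to their nearest cluster centroid,
    # leaving stable clients (and their cluster models) untouched --
    # the "selective" part that keeps re-clustering overhead low.
    assignment = assignment.copy()
    for i in np.flatnonzero(drifted):
        dists = np.linalg.norm(centroids - embeddings[i], axis=1)
        assignment[i] = int(np.argmin(dists))
    return assignment
```

For example, a client whose representation moves from near cluster 0 to near cluster 1 gets reassigned, while all other assignments are preserved; full re-clustering of every client is avoided.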
Minghao Li, Dmitrii Avdiukhin, Rana Shahout, Nikita Ivkin, Vladimir Braverman, Minlan Yu
Computing Technology, Computer Technology
Minghao Li, Dmitrii Avdiukhin, Rana Shahout, Nikita Ivkin, Vladimir Braverman, Minlan Yu. Federated Learning Clients Clustering with Adaptation to Data Drifts [EB/OL]. (2025-06-25) [2025-08-02]. https://arxiv.org/abs/2411.01580