Shift Happens: Mixture of Experts based Continual Adaptation in Federated Learning
Federated Learning (FL) enables collaborative model training across decentralized clients without sharing raw data, yet it faces significant challenges in real-world settings where client data distributions evolve over time. This paper tackles the problem of covariate and label shifts in streaming FL environments, where non-stationary data distributions degrade model performance and call for adaptive middleware solutions. We introduce ShiftEx, a shift-aware mixture-of-experts framework that dynamically creates and trains specialized global models in response to distribution shifts, detecting covariate shifts via Maximum Mean Discrepancy (MMD). The framework employs a latent memory mechanism for expert reuse and a facility-location-based optimization that jointly minimizes covariate mismatch, expert creation cost, and label imbalance. Through theoretical analysis and comprehensive experiments on benchmark datasets, we demonstrate 5.5-12.9 percentage point accuracy improvements and 22-95% faster adaptation over state-of-the-art FL baselines across diverse shift scenarios. The proposed approach offers a scalable, privacy-preserving middleware solution for FL systems operating under non-stationary, real-world conditions while minimizing communication and computational overhead.
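The abstract names Maximum Mean Discrepancy as the covariate-shift detector but does not spell out the routine. The following is a minimal illustrative sketch, not the paper's implementation: it compares a reference feature window against the current window with an RBF-kernel MMD estimate, and the `gamma`, `threshold`, and windowing choices are assumptions for illustration only.

```python
import numpy as np

def rbf_kernel(x, y, gamma):
    # Pairwise squared Euclidean distances between rows of x and y,
    # mapped through an RBF kernel exp(-gamma * ||x - y||^2).
    d2 = (np.sum(x**2, axis=1)[:, None]
          + np.sum(y**2, axis=1)[None, :]
          - 2.0 * x @ y.T)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of squared MMD between samples x and y."""
    kxx = rbf_kernel(x, x, gamma).mean()
    kyy = rbf_kernel(y, y, gamma).mean()
    kxy = rbf_kernel(x, y, gamma).mean()
    return kxx + kyy - 2.0 * kxy

def shift_detected(reference, current, gamma=1.0, threshold=0.05):
    """Flag a covariate shift when the MMD between a reference window and
    the current window of client features exceeds a threshold.
    `threshold` and `gamma` are hypothetical tuning knobs, not values
    taken from the paper."""
    return mmd2(reference, current, gamma) > threshold

# Usage sketch: a shifted batch should trigger detection.
rng = np.random.default_rng(0)
reference = rng.normal(size=(256, 16))
current = rng.normal(loc=1.5, size=(256, 16))
print(shift_detected(reference, current))  # True for this synthetic shift
```

In a streaming FL setting, a check like this would run per client (or per cohort) on feature embeddings rather than raw inputs, so no raw data leaves the client; how ShiftEx actually schedules and aggregates these checks is specified in the paper itself.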
Rahul Atul Bhope, K. R. Jayaram, Praveen Venkateswaran, Nalini Venkatasubramanian
Computing Technology; Computer Technology
Rahul Atul Bhope, K. R. Jayaram, Praveen Venkateswaran, Nalini Venkatasubramanian. Shift Happens: Mixture of Experts based Continual Adaptation in Federated Learning [EB/OL]. (2025-06-23) [2025-07-16]. https://arxiv.org/abs/2506.18789