A Generalized Meta Federated Learning Framework with Theoretical Convergence Guarantees
Meta federated learning (FL) is a personalized variant of FL in which multiple agents collaborate on training an initial shared model without exchanging raw data samples. The initial model should be trained such that current or new agents can easily adapt it to their local datasets after one or a few fine-tuning steps, thus improving model personalization. Conventional meta FL approaches minimize the average loss of agents on the local models obtained after one step of fine-tuning. In practice, agents may need to apply several fine-tuning steps to adapt the global model to their local data, especially under highly heterogeneous data distributions across agents. To this end, we present a generalized meta FL framework that minimizes the average loss of agents on their local models after an arbitrary number $\nu$ of fine-tuning steps. For this generalized framework, we present a variant of the well-known federated averaging (FedAvg) algorithm and conduct a comprehensive theoretical convergence analysis to characterize the convergence speed as well as the behavior of the meta loss functions in both the exact and approximate cases. Our experiments on real-world datasets demonstrate superior accuracy and faster convergence for the proposed scheme compared to conventional approaches.
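As a concrete illustration of the generalized objective described in the abstract, the $\nu$-step meta loss can be sketched as follows; the notation here (local loss $f_i$ of agent $i$, fine-tuning step size $\alpha$, and $n$ agents) is introduced for illustration and may differ from the paper's own formulation.
\[
\min_{w}\;\frac{1}{n}\sum_{i=1}^{n} f_i\!\big(w_i^{(\nu)}\big),
\qquad
w_i^{(0)} = w,\quad
w_i^{(k+1)} = w_i^{(k)} - \alpha \nabla f_i\!\big(w_i^{(k)}\big),\quad k = 0,\dots,\nu-1.
\]
Setting $\nu = 1$ recovers the conventional meta FL objective, i.e., the average loss of agents on the local models obtained after a single fine-tuning step.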
Mohammad Vahid Jamali, Hamid Saber, Jung Hyun Bae
Computing Technology, Computer Technology
Mohammad Vahid Jamali, Hamid Saber, Jung Hyun Bae. A Generalized Meta Federated Learning Framework with Theoretical Convergence Guarantees [EB/OL]. (2025-04-30) [2025-05-22]. https://arxiv.org/abs/2504.21327