One-Shot Federated Learning
We present one-shot federated learning, in which a central server learns a global model over a network of federated devices in a single round of communication. Our approach, which draws on ensemble learning and knowledge aggregation, achieves an average relative gain of 51.5% in AUC over local baselines and reaches 90.1% of the performance of the (unattainable) global ideal. We discuss these methods and identify several promising directions for future work.
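The snippet below is a minimal illustrative sketch of the one-shot ensemble idea described in the abstract, not the paper's exact pipeline: the synthetic data, logistic-regression base learners, device split, and probability-averaging rule are all assumptions made for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a federated network of devices (illustrative only).
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

n_devices = 10
rng = np.random.RandomState(0)
device_splits = np.array_split(rng.permutation(len(X_train)), n_devices)

# Step 1 (on-device): each device fits a model on its local shard only.
local_models = [
    LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
    for idx in device_splits
]

# Step 2 (single communication round): devices send their trained models to the server.
# Step 3 (server-side): ensemble the local models by averaging predicted probabilities.
ensemble_scores = np.mean(
    [m.predict_proba(X_test)[:, 1] for m in local_models], axis=0
)

print("ensemble AUC:", roc_auc_score(y_test, ensemble_scores))
print("average local AUC:",
      np.mean([roc_auc_score(y_test, m.predict_proba(X_test)[:, 1]) for m in local_models]))
```

On reasonably partitioned data, the server-side ensemble typically outperforms the average local model, which is the gap the reported relative AUC gains refer to.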
Neel Guha, Ameet Talwalkar, Virginia Smith
Computing Technology, Computer Technology
Neel Guha, Ameet Talwalkar, Virginia Smith. One-Shot Federated Learning [EB/OL]. (2019-02-28) [2025-05-01]. https://arxiv.org/abs/1902.11175.