Personalizing Federated Learning with Over-the-Air Computations

Source: arXiv

Abstract

Federated edge learning is a promising technology for deploying intelligence at the edge of wireless networks in a privacy-preserving manner. In such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server. However, training efficiency is often throttled by limited communication resources and data heterogeneity. This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck. Additionally, we leverage a bi-level optimization framework to personalize the federated learning model and cope with data heterogeneity, which enhances the generalization and robustness of each client's local model. We elaborate on the model training procedure and its advantages over conventional frameworks. We provide a convergence analysis that theoretically demonstrates the training efficiency. We also conduct extensive experiments to validate the efficacy of the proposed framework.
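To illustrate the two ingredients named in the abstract, below is a minimal NumPy sketch of analog over-the-air (AirComp) aggregation combined with a proximal-style personalized local update. The specific power control (simple channel inversion), fading model, learning rates, and the proximal form of the bi-level personalization are assumptions made for illustration only; they are not taken from the paper itself.

```python
# Minimal sketch of federated learning with analog over-the-air aggregation.
# The channel model, power control, and personalization objective are
# illustrative assumptions, not the authors' exact formulation.
import numpy as np

rng = np.random.default_rng(0)

def client_update(global_model, local_grad_fn, lam=0.1, lr=0.05, steps=5):
    """Personalized local update: descend on the local loss while a proximal
    term keeps the personalized model close to the global one (assumed
    bi-level/proximal-style personalization)."""
    theta = global_model.copy()
    for _ in range(steps):
        grad = local_grad_fn(theta) + lam * (theta - global_model)
        theta -= lr * grad
    return theta - global_model  # model difference sent back for aggregation

def over_the_air_aggregate(updates, channel_gains, noise_std=0.05):
    """Analog over-the-air aggregation: each client pre-scales its update by
    the inverse of its channel gain, all clients transmit simultaneously, and
    the multiple-access channel superposes the signals. The server observes
    the noisy sum and normalizes by the number of clients."""
    n = len(updates)
    superposed = sum(h * (u / h) for u, h in zip(updates, channel_gains))
    received = superposed + noise_std * rng.standard_normal(updates[0].shape)
    return received / n  # noisy estimate of the average update

# Toy example: quadratic local losses with heterogeneous optima (non-IID data).
dim, num_clients = 8, 10
optima = [rng.standard_normal(dim) for _ in range(num_clients)]
grad_fns = [lambda th, c=c: th - c for c in optima]  # grad of 0.5*||th - c||^2

global_model = np.zeros(dim)
for _ in range(20):
    updates = [client_update(global_model, g) for g in grad_fns]
    gains = rng.rayleigh(scale=1.0, size=num_clients) + 0.1  # fading channels
    global_model += over_the_air_aggregate(updates, gains)

print("distance to mean of local optima:",
      np.linalg.norm(global_model - np.mean(optima, axis=0)))
```

The key point of the sketch is that the server never receives individual client updates: the superposition property of the wireless channel performs the summation "for free", at the cost of additive receiver noise, while the proximal term lets each client keep a personalized model near the shared global one.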

Howard H. Yang, Zihan Chen, Tony Q. S. Quek, Zeshen Li

Subjects: Wireless Communications; Communication and Computing Technology; Computer Technology

Howard H. Yang, Zihan Chen, Tony Q. S. Quek, Zeshen Li. Personalizing Federated Learning with Over-the-Air Computations [EB/OL]. (2023-02-24) [2025-05-22]. https://arxiv.org/abs/2302.12509.
