
Federated ADMM from Bayesian Duality

Source: arXiv
Abstract

ADMM is a popular method for federated deep learning which originated in the 1970s and, even though many new variants of it have been proposed since then, its core algorithmic structure has remained unchanged. Here, we take a major departure from the old structure and present a fundamentally new way to derive and extend federated ADMM. We propose to use a structure called Bayesian Duality which exploits a duality of the posterior distributions obtained by solving a variational-Bayesian reformulation of the original problem. We show that this naturally recovers the original ADMM when isotropic Gaussian posteriors are used, and yields non-trivial extensions for other posterior forms. For instance, full-covariance Gaussians lead to Newton-like variants of ADMM, while diagonal covariances result in a cheap Adam-like variant. This is especially useful to handle heterogeneity in federated deep learning, giving up to 7% accuracy improvements over recent baselines. Our work opens a new Bayesian path to improve primal-dual methods.
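To make the "original ADMM" structure the abstract refers to concrete, here is a minimal sketch of classic consensus ADMM for a federated least-squares problem. This is an illustrative baseline only, not the paper's Bayesian-duality derivation; the problem setup, function names, and hyperparameters are assumptions for the example.

```python
import numpy as np

def consensus_admm(As, bs, rho=1.0, iters=500):
    """Classic consensus ADMM: K clients hold local losses
    f_k(w) = 0.5 * ||A_k w - b_k||^2 and agree on a global z."""
    d = As[0].shape[1]
    K = len(As)
    ws = [np.zeros(d) for _ in range(K)]  # local primal variables
    us = [np.zeros(d) for _ in range(K)]  # scaled dual variables
    z = np.zeros(d)                       # global consensus variable
    for _ in range(iters):
        # Client step: w_k = argmin f_k(w) + (rho/2)||w - z + u_k||^2
        # (closed form here because f_k is quadratic)
        for k in range(K):
            ws[k] = np.linalg.solve(
                As[k].T @ As[k] + rho * np.eye(d),
                As[k].T @ bs[k] + rho * (z - us[k]))
        # Server step: average local solutions plus duals
        z = np.mean([ws[k] + us[k] for k in range(K)], axis=0)
        # Dual ascent step
        for k in range(K):
            us[k] += ws[k] - z
    return z

# Illustrative run: 4 clients, each with 20 samples in 3 dimensions.
rng = np.random.default_rng(0)
As = [rng.normal(size=(20, 3)) for _ in range(4)]
bs = [rng.normal(size=20) for _ in range(4)]
z = consensus_admm(As, bs)
# The consensus iterate should approach the centralized solution.
w_star, *_ = np.linalg.lstsq(np.vstack(As), np.concatenate(bs), rcond=None)
```

The paper's point is that this fixed update pattern (local solve, averaging, dual step) can instead be derived from a variational-Bayesian reformulation, so that changing the posterior family (full-covariance or diagonal Gaussians) yields Newton-like or Adam-like variants rather than the classic updates above.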

Thomas Möllenhoff, Siddharth Swaroop, Finale Doshi-Velez, Mohammad Emtiyaz Khan

Computing Technology; Computer Technology

Thomas Möllenhoff, Siddharth Swaroop, Finale Doshi-Velez, Mohammad Emtiyaz Khan. Federated ADMM from Bayesian Duality [EB/OL]. (2025-06-16) [2025-06-28]. https://arxiv.org/abs/2506.13150.
