Distributed Optimization and Learning for Automated Stepsize Selection with Finite Time Coordination
Distributed optimization and learning algorithms are designed to operate over large-scale networks, enabling effective and efficient processing of vast amounts of data. One of the main challenges for ensuring a smooth learning process in gradient-based methods is the appropriate selection of a learning stepsize. Most current distributed approaches let individual nodes adapt their stepsizes locally. However, this may introduce stepsize heterogeneity in the network, disrupting the learning process and potentially leading to divergence. In this paper, we propose a distributed learning algorithm that incorporates a novel mechanism for automating stepsize selection among nodes. Our main idea relies on a finite time coordination algorithm that eliminates stepsize heterogeneity among nodes. We analyze the operation of our algorithm and establish its convergence to the optimal solution. We conclude with numerical simulations for a linear regression problem, showing that eliminating stepsize heterogeneity improves convergence speed and accuracy over current approaches.
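To make the abstract's main idea concrete, the sketch below pairs a finite-time coordination step with decentralized gradient descent on a linear regression problem. It is a minimal sketch, not the paper's algorithm: the ring topology, uniform mixing weights, and the choice of min-consensus on locally chosen stepsizes (which converges exactly after a number of rounds equal to the network diameter, so every node ends up with the same, safest stepsize) are all illustrative assumptions.

```python
import numpy as np

# Sketch: finite-time stepsize coordination (assumed here to be
# min-consensus) followed by decentralized gradient descent for
# linear regression. Topology and weights are illustrative.

rng = np.random.default_rng(0)
n_nodes, dim = 5, 3

# Ring topology: each node communicates with its two neighbors.
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

# Each node holds a local slice of the regression data (A_i, b_i).
x_true = rng.standard_normal(dim)
A = [rng.standard_normal((20, dim)) for _ in range(n_nodes)]
b = [A[i] @ x_true + 0.01 * rng.standard_normal(20) for i in range(n_nodes)]

# Locally chosen stepsizes 1/L_i, with L_i the smoothness constant
# (largest eigenvalue of A_i^T A_i); these are heterogeneous.
local_steps = [1.0 / np.linalg.norm(A[i].T @ A[i], 2) for i in range(n_nodes)]

# Finite-time coordination: min-consensus reaches exact agreement in
# at most diameter(G) synchronous rounds.
steps = list(local_steps)
diameter = n_nodes // 2  # diameter of a 5-node ring
for _ in range(diameter):
    steps = [min([steps[i]] + [steps[j] for j in neighbors[i]])
             for i in range(n_nodes)]
alpha = steps[0]
assert all(abs(s - alpha) < 1e-12 for s in steps)  # heterogeneity eliminated

# Decentralized gradient descent with the common stepsize: each node
# averages with its neighbors, then steps along the local gradient of
# 0.5 * ||A_i x - b_i||^2.
x = [np.zeros(dim) for _ in range(n_nodes)]
for _ in range(2000):
    x_avg = [(x[i] + sum(x[j] for j in neighbors[i])) / (1 + len(neighbors[i]))
             for i in range(n_nodes)]
    x = [x_avg[i] - alpha * (A[i].T @ (A[i] @ x_avg[i] - b[i]))
         for i in range(n_nodes)]

print("max node error:", max(np.linalg.norm(xi - x_true) for xi in x))
```

Taking the minimum local stepsize is one natural coordination rule, since it is safe for every node's local smoothness constant; the paper's finite time coordination mechanism and update rule may differ.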
Apostolos I. Rikos, Nicola Bastianello, Themistoklis Charalambous, Karl H. Johansson
Subjects: Fundamental theory of automation; computing and computer technology
Apostolos I. Rikos, Nicola Bastianello, Themistoklis Charalambous, Karl H. Johansson. Distributed Optimization and Learning for Automated Stepsize Selection with Finite Time Coordination [EB/OL]. (2025-08-07) [2025-08-24]. https://arxiv.org/abs/2508.05887