
Distributed proximal gradient algorithm for non-smooth non-convex optimization over time-varying networks

Source: arXiv
Abstract

This note studies the distributed non-convex optimization problem with non-smooth regularization, which has wide applications in decentralized learning, estimation, and control. The objective function is the sum of local objective functions, each consisting of a differentiable (possibly non-convex) cost function and a non-smooth convex function. This paper presents a distributed proximal gradient algorithm for the non-smooth non-convex optimization problem over time-varying multi-agent networks. Each agent updates its local variable estimate via a multi-step consensus operator and the proximal operator. We prove that the generated local variables achieve consensus and converge to the set of critical points at a rate of $O(1/T)$. Finally, we verify the efficacy of the proposed algorithm through numerical simulations.
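
The per-agent update described in the abstract (multi-step consensus followed by a proximal gradient step) can be illustrated with a minimal sketch. The snippet below assumes an $\ell_1$ regularizer (whose proximal operator is soft-thresholding) and a doubly stochastic mixing matrix for each communication round; the function names, step size, and number of consensus steps are illustrative and do not reproduce the paper's exact update rule or parameter choices.

```python
# Minimal sketch of one iteration of a distributed proximal gradient method
# with multi-step consensus. Assumptions (not from the paper): l1 regularizer,
# doubly stochastic mixing matrix W, fixed step size alpha.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (assumed regularizer)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def distributed_prox_grad_step(x, grads, W, alpha, lam, consensus_steps=3):
    """
    One iteration for all agents.

    x     : (n_agents, dim) array of current local estimates
    grads : list of callables; grads[i](x_i) returns the gradient of
            agent i's smooth (possibly non-convex) local cost
    W     : (n_agents, n_agents) doubly stochastic mixing matrix for this
            round (time-varying networks pass a different W each iteration)
    alpha : step size
    lam   : weight of the non-smooth l1 term
    """
    # Multi-step consensus: repeatedly average neighbors' estimates.
    z = x.copy()
    for _ in range(consensus_steps):
        z = W @ z
    # Local gradient step on the smooth cost, then the proximal operator
    # of the non-smooth convex term.
    x_new = np.empty_like(x)
    for i in range(x.shape[0]):
        x_new[i] = soft_threshold(z[i] - alpha * grads[i](z[i]), alpha * lam)
    return x_new
```

For a time-varying network, this step would be called repeatedly with a different mixing matrix at each iteration, reflecting the communication graph active in that round.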

Jian Sun, Xia Jiang, Jie Chen, Xianlin Zeng

Computing Technology, Computer Technology

Jian Sun, Xia Jiang, Jie Chen, Xianlin Zeng. Distributed proximal gradient algorithm for non-smooth non-convex optimization over time-varying networks [EB/OL]. (2021-03-03) [2025-08-09]. https://arxiv.org/abs/2103.02271
