
The Stochastic Multi-Proximal Method for Nonsmooth Optimization

Source: arXiv

Abstract

Stochastic gradient descent type methods are ubiquitous in machine learning, but they are only applicable to the optimization of differentiable functions. Proximal algorithms are more general and applicable to nonsmooth functions. We propose a new stochastic and variance-reduced algorithm, the Stochastic Multi-Proximal Method (SMPM), in which the proximity operators of a (possibly empty) random subset of functions are called at every iteration, according to an arbitrary sampling distribution. Several existing algorithms, including Point-SAGA (2016), ProxSkip (2022) and RandProx-Minibatch (2023), are recovered as particular cases. We derive linear convergence results in the presence of strong convexity and smoothness or similarity of the functions. We prove convergence in the general convex case and accelerated O(1/t^2) convergence with varying stepsizes in the presence of strong convexity alone. Our results are new even for the above special cases. Moreover, we show an application to distributed optimization with compressed communication, outperforming existing methods.
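To illustrate the general idea of calling proximity operators of randomly sampled terms rather than gradients, here is a minimal sketch of a plain stochastic proximal point iteration on a nonsmooth sum of absolute values. This is not the SMPM update from the paper (SMPM adds variance reduction and arbitrary subset sampling); all function and variable names are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch, NOT the SMPM algorithm: a plain stochastic proximal
# point method on f(x) = (1/n) * sum_i |x - a_i|, whose minimizer is the
# median of the a_i. Each iteration applies the proximity operator of one
# randomly sampled nonsmooth term, with a decreasing stepsize.

def prox_abs(x, a, gamma):
    """Proximity operator of gamma * |x - a| (soft-thresholding around a)."""
    return a + np.sign(x - a) * np.maximum(np.abs(x - a) - gamma, 0.0)

rng = np.random.default_rng(0)
a = rng.normal(size=100)   # data defining the n nonsmooth terms
x = 0.0                    # initial iterate
gamma0 = 0.5               # base stepsize (assumed value)

for t in range(1, 5001):
    i = rng.integers(len(a))             # sample one term uniformly at random
    x = prox_abs(x, a[i], gamma0 / t)    # prox step with decreasing stepsize

print(f"estimate: {x:.4f}, median of a: {np.median(a):.4f}")
```

In this simplified variant the stepsize must decay because sampling a single prox per step introduces variance; the variance-reduced design described in the abstract is precisely what allows constant stepsizes and the stated linear or accelerated rates.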

Laurent Condat, Elnur Gasanov, Peter Richtárik

Subject classification: computing technology; computer technology

Laurent Condat, Elnur Gasanov, Peter Richtárik. The Stochastic Multi-Proximal Method for Nonsmooth Optimization [EB/OL]. (2025-05-18) [2025-06-14]. https://arxiv.org/abs/2505.12409.
