
A Stochastic Gradient Descent Method with Global Convergence for Minimizing Nearly Convex Functions

Source: arXiv
Abstract

This paper proposes a stochastic gradient descent method with an adaptive Gaussian noise term for minimizing nonconvex differentiable functions. The noise term in the algorithm, independent of the gradient, is determined by the difference between the function value at the current iterate and a lower bound estimate of the optimal value. In both probability space and state space, our theoretical analysis shows that for a class of nonconvex functions, represented by nearly convex functions, the proposed algorithm converges linearly to a neighborhood of the global optimal solution whose diameter depends on the variance of the gradient and the deviation between the estimated lower bound and the optimal value. In particular, when full gradient information is used and a sharp lower bound of the objective function is available, the algorithm converges linearly to the global optimal solution. Furthermore, we propose a double-loop method that alternately updates the lower bound estimate of the optimal value and the iterate sequence, achieving convergence to a neighborhood of the global optimal solution that depends only on the variance of the gradient, provided that the lower bound estimate is asymptotically accurate. Numerical experiments on several concrete problems demonstrate the effectiveness of the proposed algorithm and validate the theoretical findings.
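The abstract describes the update rule only at a high level: a (stochastic) gradient step plus a Gaussian perturbation, drawn independently of the gradient, whose size is governed by the gap f(x_k) - ℓ between the current function value and a lower bound estimate ℓ of the optimal value. The Python sketch below is one plausible reading of that description; the name noisy_sgd, the square-root scaling of the noise, the constant step size, and the test function are illustrative assumptions, not the paper's exact scheme.

import numpy as np

def noisy_sgd(grad, f, x0, lower_bound, step_size=0.01,
              noise_scale=1.0, n_iters=1000, rng=None):
    # Hypothetical sketch: a gradient step plus adaptive Gaussian noise.
    # The noise is drawn independently of the gradient, and its standard
    # deviation shrinks with the estimated optimality gap, so the
    # perturbation vanishes as f(x) approaches the lower bound.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = grad(x)                          # stochastic or full gradient
        gap = max(f(x) - lower_bound, 0.0)   # estimated optimality gap
        xi = rng.standard_normal(x.shape)    # Gaussian noise, independent of g
        x = x - step_size * g + noise_scale * np.sqrt(step_size * gap) * xi
    return x

# Example on a plausibly "nearly convex" function: a convex quadratic
# plus a small oscillatory perturbation, whose sharp lower bound is 0.
f = lambda x: np.sum(x**2) + 0.1 * np.sum(np.sin(5 * x)**2)
grad = lambda x: 2 * x + 0.5 * np.sin(10 * x)
x_final = noisy_sgd(grad, f, x0=np.ones(5), lower_bound=0.0)

The double-loop variant mentioned in the abstract would wrap a routine like this in an outer loop that periodically refines lower_bound, e.g. from observed function values, so that the estimate becomes asymptotically accurate.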

Chenglong Bao, Liang Chen, Weizhi Shao

Subject: Computing Technology, Computer Technology

Chenglong Bao, Liang Chen, Weizhi Shao. A Stochastic Gradient Descent Method with Global Convergence for Minimizing Nearly Convex Functions [EB/OL]. (2025-05-06) [2025-06-15]. https://arxiv.org/abs/2505.03222.
