National Preprint Platform (国家预印本平台)

Gradient Methods with Online Scaling Part I. Theoretical Foundations

Source: arXiv

Abstract

This paper establishes the theoretical foundations of online scaled gradient methods (OSGM), a framework that uses online learning to adapt stepsizes and provably accelerate first-order methods. OSGM quantifies the effectiveness of a stepsize via a feedback function motivated by a convergence measure, and uses this feedback to adjust the stepsize through an online learning algorithm. Consequently, instantiations of OSGM achieve convergence rates that are asymptotically no worse than those of the optimal stepsize. OSGM yields desirable convergence guarantees on smooth convex problems, including 1) trajectory-dependent global convergence on smooth convex objectives; 2) an improved complexity result on smooth strongly convex problems; and 3) local superlinear convergence. Notably, OSGM constitutes a new family of first-order methods with non-asymptotic superlinear convergence, joining the celebrated quasi-Newton methods. Finally, OSGM explains the empirical success of the popular hypergradient-descent heuristic in optimization for machine learning.
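To make the framework concrete, here is a minimal sketch of the idea the abstract describes: run gradient descent on a toy quadratic while an online learner adapts the scalar stepsize from a feedback signal, in the spirit of the hypergradient-descent heuristic the paper analyzes. The objective, the meta-stepsize value `beta`, and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy smooth, strongly convex objective f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 10.0])

def grad(x):
    return A @ x

x = np.array([1.0, 1.0])
eta = 0.01        # initial stepsize, adapted online
beta = 1e-3       # meta-stepsize of the online learner (assumed value)

g_prev = grad(x)
for _ in range(200):
    g = grad(x)
    # Hypergradient feedback: with x_k = x_{k-1} - eta * g_{k-1},
    # the derivative of f(x_k) w.r.t. eta is -g_k . g_{k-1}.
    # One online gradient step on eta against this feedback:
    eta -= beta * (-(g @ g_prev))
    eta = max(eta, 1e-6)   # keep the stepsize positive
    x = x - eta * g
    g_prev = g

f_val = 0.5 * x @ A @ x    # should be driven near zero
```

The point of the sketch is the division of labor: the inner loop is plain gradient descent, while the outer update treats the stepsize itself as a decision variable of an online learning problem, so `eta` drifts toward a value that is competitive with the best fixed stepsize in hindsight.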

Wenzhi Gao, Ya-Chi Chu, Yinyu Ye, Madeleine Udell

Subject: computing technology; computer technology

Wenzhi Gao, Ya-Chi Chu, Yinyu Ye, Madeleine Udell. Gradient Methods with Online Scaling Part I. Theoretical Foundations [EB/OL]. (2025-05-29) [2025-06-22]. https://arxiv.org/abs/2505.23081.