
Adaptive Acceleration Without Strong Convexity Priors Or Restarts


Source: arXiv
Abstract

Accelerated optimization methods are typically parametrized by the Lipschitz smoothness and strong convexity parameters of a given problem, which are rarely known in practice. While the Lipschitz smoothness parameter L can be easily estimated by backtracking, directly estimating (i.e., without restarts) the strong convexity parameter m remains an open problem. This paper addresses this challenge by proposing NAG-free, an optimization method that achieves acceleration while estimating m online without priors or restarts. Our method couples Nesterov's accelerated gradient (NAG) method with an inexpensive estimator that does not require additional computation, only storage of one more iterate and gradient. We prove that in the canonical setting of the open problem, NAG-free converges globally at least as fast as gradient descent and locally at an accelerated rate, without resorting to m priors or restarts. We also introduce a complementary estimator for L, symmetric to the m estimator. We provide extensive empirical evidence that this NAG-free heuristic with L and m estimators often achieves acceleration in practice, outperforming restart schemes and traditional accelerated methods parametrized by offline estimates of L and m. Our experiments also suggest that NAG-free might be effective even for nonconvex problems when combined with restarts.
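The abstract describes the mechanism only at a high level: a NAG-style iteration coupled with an inexpensive online estimator that needs just one extra stored iterate and gradient. Purely as an illustration of that idea, the Python sketch below uses hypothetical secant-style (Barzilai-Borwein-like) quotients to estimate m and L on the fly; the function name nag_free_like, the specific estimators, the safeguards, and the quadratic test problem are all assumptions made for this sketch and are not claimed to be the paper's exact NAG-free algorithm.

import numpy as np

def nag_free_like(grad, x0, n_iters=500, L=1.0, m=1e-6):
    # Hedged sketch only: NAG-style loop with secant-based online estimates
    # of the smoothness parameter L and the strong convexity parameter m.
    # The quotients below are hypothetical, not the paper's exact estimators.
    x = np.array(x0, dtype=float)
    y = x.copy()
    y_old, g_old = None, None
    for _ in range(n_iters):
        g = grad(y)
        if y_old is not None:
            s, d = y - y_old, g - g_old      # only one extra iterate and gradient stored
            sd, ss, dd = s @ d, s @ s, d @ d
            if sd > 0 and ss > 0:
                m = max(sd / ss, 1e-12)      # lower-curvature (strong convexity) estimate
                L = max(dd / sd, m)          # upper-curvature (smoothness) estimate
        x_new = y - g / L                    # gradient step from the extrapolated point
        q = np.sqrt(m / L)
        beta = (1.0 - q) / (1.0 + q)         # standard NAG momentum for known (L, m)
        y_old, g_old = y, g
        y = x_new + beta * (x_new - x)
        x = x_new
    return x

# Illustrative use on a strongly convex quadratic f(x) = 0.5 * x^T A x
A = np.diag([1.0, 10.0, 100.0])
x_final = nag_free_like(lambda x: A @ x, x0=np.ones(3))

The point of the sketch is only the storage pattern highlighted in the abstract: the loop keeps one extra iterate and gradient (y_old, g_old) and refreshes its curvature estimates from them at no additional gradient-oracle cost.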

Joao V. Cavalcanti, Laurent Lessard, Ashia C. Wilson

Computing Technology, Computer Technology

Joao V. Cavalcanti, Laurent Lessard, Ashia C. Wilson. Adaptive Acceleration Without Strong Convexity Priors Or Restarts [EB/OL]. (2025-06-23) [2025-07-22]. https://arxiv.org/abs/2506.13033.
