国家预印本平台 (National Preprint Platform)

Fast Stochastic Second-Order Adagrad for Nonconvex Bound-Constrained Optimization

来源 (Source): arXiv
英文摘要 (English Abstract)

ADAGB2, a generalization of the Adagrad algorithm for stochastic optimization, is introduced; it is also applicable to bound-constrained problems and can exploit second-order information when available. It is shown that, given $\delta\in(0,1)$ and $\epsilon\in(0,1]$, the ADAGB2 algorithm needs at most $\mathcal{O}(\epsilon^{-2})$ iterations to ensure an $\epsilon$-approximate first-order critical point of the bound-constrained problem with probability at least $1-\delta$, provided the average root-mean-square error of the gradient oracle is sufficiently small. Should this condition fail, it is also shown that the optimality level of the iterates is bounded above by this average. The relation between the approximate and true classical projected-gradient-based optimality measures for bound-constrained problems is also investigated, and it is shown that merely assuming unbiased gradient oracles may be insufficient to ensure convergence in $\mathcal{O}(\epsilon^{-2})$ iterations.
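To make the setting concrete, the sketch below shows a plain first-order Adagrad step projected onto box constraints, together with the classical projected-gradient criticality measure $\|P_{[l,u]}(x-\nabla f(x))-x\|$ that the abstract refers to. This is an illustrative sketch only, not the authors' ADAGB2 algorithm: the function names, the step size `alpha`, and the quadratic test objective are all assumptions for the example.

```python
import numpy as np

def projected_adagrad_step(x, g, v, lo, hi, alpha=0.5, eps=1e-8):
    """One first-order Adagrad step projected onto the box [lo, hi].

    x  : current iterate
    g  : (stochastic) gradient estimate at x
    v  : running sum of squared gradient entries (Adagrad accumulator)
    """
    v_new = v + g**2                          # accumulate squared gradients
    step = alpha * g / np.sqrt(v_new + eps)   # per-coordinate scaled step
    x_new = np.clip(x - step, lo, hi)         # project back onto the bounds
    return x_new, v_new

def projected_gradient_optimality(x, g, lo, hi):
    """Classical projected-gradient measure || P_[lo,hi](x - g) - x ||;
    it is zero exactly at first-order critical points of the box problem."""
    return np.linalg.norm(np.clip(x - g, lo, hi) - x)

# Illustrative run: minimize f(x) = ||x - c||^2 / 2 over the box [0, 1]^2,
# with c chosen outside the box so the solution sits on the boundary.
lo, hi = np.zeros(2), np.ones(2)
c = np.array([2.0, -1.0])                     # unconstrained minimizer, infeasible
x, v = np.array([0.5, 0.5]), np.zeros(2)
for _ in range(20):
    g = x - c                                 # exact gradient of the quadratic
    x, v = projected_adagrad_step(x, g, v, lo, hi)
print(x, projected_gradient_optimality(x, x - c, lo, hi))
```

With a stochastic gradient oracle, `g` would be replaced by a noisy estimate; the abstract's result says the $\mathcal{O}(\epsilon^{-2})$ complexity survives provided the oracle's average root-mean-square error is small enough.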

S. Bellavia, S. Gratton, B. Morini, Ph. L. Toint

计算技术、计算机技术 (Computing Technology; Computer Technology)

S. Bellavia, S. Gratton, B. Morini, Ph. L. Toint. Fast Stochastic Second-Order Adagrad for Nonconvex Bound-Constrained Optimization [EB/OL]. (2025-05-09) [2025-07-16]. https://arxiv.org/abs/2505.06374.