国家预印本平台 (National Preprint Platform)

Optimal Stochastic Non-smooth Non-convex Optimization through Online-to-Non-convex Conversion


Source: arXiv
Abstract

We present new algorithms for optimizing non-smooth, non-convex stochastic objectives based on a novel analysis technique. This improves the current best-known complexity for finding a $(δ,ε)$-stationary point from $O(ε^{-4}δ^{-1})$ stochastic gradient queries to $O(ε^{-3}δ^{-1})$, which we also show to be optimal. Our primary technique is a reduction from non-smooth non-convex optimization to online learning, after which our results follow from standard regret bounds in online learning. For deterministic and second-order smooth objectives, applying more advanced optimistic online learning techniques enables a new complexity of $O(ε^{-1.5}δ^{-0.5})$. Our techniques also recover all optimal or best-known results for finding $ε$-stationary points of smooth or second-order smooth objectives in both stochastic and deterministic settings.
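The reduction named in the abstract can be illustrated with a minimal sketch. The conversion has an online learner propose bounded updates $Δ_n$, sets $x_n = x_{n-1} + Δ_n$, and queries the gradient at a uniformly random point on the segment between consecutive iterates, which connects the learner's regret to $(δ,ε)$-stationarity (small convex combinations of nearby gradients). The sketch below is an assumption-laden illustration, not the paper's exact algorithm: it uses plain projected online gradient descent as the learner, arbitrary step sizes, and averages gradients over the whole run rather than over windows as the analysis prescribes.

```python
import numpy as np

def o2nc(grad, x0, T, D=0.1, lr=0.05, rng=None):
    """Sketch of an online-to-non-convex conversion loop.

    grad: (stochastic) gradient oracle for the objective.
    D:    radius bound on the online learner's plays Delta_n.
    lr:   learner step size (illustrative choice).
    Returns the final iterate and the average of queried gradients
    (a crude stand-in for the windowed averages in the analysis).
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    delta = np.zeros_like(x)        # online learner's current play
    grads = []
    for _ in range(T):
        # Query the gradient at a random point between x and x + delta:
        # this randomization is what ties the regret bound to
        # (delta, epsilon)-stationarity of nearby gradients.
        s = rng.uniform()
        g = grad(x + s * delta)
        grads.append(g)
        x = x + delta               # apply the learner's play
        # Online gradient descent on the linear loss <g, .>,
        # projected back onto the ball of radius D.
        delta = delta - lr * g
        norm = np.linalg.norm(delta)
        if norm > D:
            delta *= D / norm
    return x, np.mean(grads, axis=0)
```

With a smooth test objective such as $f(x) = \lVert x \rVert^2$, the bounded plays move the iterate steadily toward the low-gradient region, which is the qualitative behavior the regret argument exploits.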

Ashok Cutkosky, Harsh Mehta, Francesco Orabona

Subject: Computing Technology; Computer Technology

Ashok Cutkosky, Harsh Mehta, Francesco Orabona. Optimal Stochastic Non-smooth Non-convex Optimization through Online-to-Non-convex Conversion [EB/OL]. (2025-08-07) [2025-08-18]. https://arxiv.org/abs/2302.03775.
