
Global Convergence of Adaptive Gradient Methods for An Over-parameterized Neural Network

Source: arXiv
English Abstract

Adaptive gradient methods like AdaGrad are widely used in optimizing neural networks. Yet, existing convergence guarantees for adaptive gradient methods require either convexity or smoothness, and, in the smooth setting, only guarantee convergence to a stationary point. We propose an adaptive gradient method and show that for two-layer over-parameterized neural networks -- if the width is sufficiently large (polynomially) -- the proposed method converges \emph{to the global minimum} in polynomial time, and convergence is robust, \emph{without the need to fine-tune hyper-parameters such as the step-size schedule and with the level of over-parametrization independent of the training error}. Our analysis indicates in particular that over-parametrization is crucial for harnessing the full potential of adaptive gradient methods in the setting of neural networks.
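To make the setting concrete, the sketch below applies a standard AdaGrad-style per-coordinate update to a wide two-layer ReLU network on synthetic data. This is not the paper's exact algorithm or analysis; the width, data, hyper-parameters, and the choice to train only the first layer are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's method): AdaGrad-style updates
# on a wide two-layer ReLU network trained with squared loss.
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 20, 5, 2048                           # samples, input dim, (large) hidden width
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm inputs
y = rng.standard_normal(n)

# Two-layer network f(x) = (1/sqrt(m)) * a^T relu(W x); only W is trained here
# (a common simplification in over-parameterization analyses, assumed for brevity).
W = rng.standard_normal((m, d))
a = rng.choice([-1.0, 1.0], size=m)

eta, eps = 0.1, 1e-8
G = np.zeros_like(W)                            # accumulated squared gradients (AdaGrad state)

def predict(W):
    return (np.maximum(X @ W.T, 0.0) @ a) / np.sqrt(m)

for step in range(500):
    residual = predict(W) - y                   # gradient of 0.5 * ||f(X) - y||^2 w.r.t. outputs
    act = (X @ W.T > 0.0).astype(float)         # n x m indicator of active ReLU units
    # dL/dW_r = (1/sqrt(m)) * a_r * sum_i residual_i * 1[w_r . x_i > 0] * x_i
    grad = ((residual[:, None] * act).T @ X) * (a[:, None] / np.sqrt(m))
    G += grad ** 2                              # AdaGrad accumulator
    W -= eta * grad / (np.sqrt(G) + eps)        # per-coordinate adaptive step
    if step % 100 == 0:
        print(f"step {step:4d}  loss {0.5 * np.sum(residual**2):.6f}")
```

With a sufficiently large width m, the training loss in such runs typically decreases toward zero, which is the qualitative behavior the abstract's global-convergence guarantee formalizes.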

Simon S. Du, Rachel Ward, Xiaoxia Wu

Subject: Computing Technology, Computer Science

Simon S. Du, Rachel Ward, Xiaoxia Wu. Global Convergence of Adaptive Gradient Methods for An Over-parameterized Neural Network [EB/OL]. (2019-02-19) [2025-07-23]. https://arxiv.org/abs/1902.07111.
