AutoSGD: Automatic Learning Rate Selection for Stochastic Gradient Descent
The learning rate is an important tuning parameter for stochastic gradient descent (SGD) and can greatly influence its performance. However, appropriate selection of a learning rate schedule across all iterations typically requires a non-trivial amount of user tuning effort. To address this, we introduce AutoSGD: an SGD method that automatically determines whether to increase or decrease the learning rate at a given iteration and then takes appropriate action. We develop theory supporting the convergence of AutoSGD, along with its deterministic counterpart for standard gradient descent. Empirical results suggest strong performance of the method on a variety of traditional optimization problems and machine learning tasks.
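The abstract does not spell out AutoSGD's decision rule, so the following is only a minimal Python sketch of the general idea, assuming a simple probe-and-compare heuristic: periodically run a few SGD steps at an increased and at a decreased learning rate, and keep whichever candidate lowers the loss more. All names (autosgd_like, grow, shrink, probe_steps) and the decision rule itself are illustrative assumptions, not the authors' method.

# A minimal sketch (not the authors' algorithm) of automatic learning-rate
# selection: periodically compare progress under an increased and a decreased
# learning rate, and keep whichever does better.
import numpy as np

def autosgd_like(grad, x0, lr=0.1, steps=1000, probe_steps=10,
                 grow=2.0, shrink=0.5, loss=None, seed=0):
    """Run SGD, adapting lr by probing a larger and a smaller value."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)

    def run(x, lr, n):
        # Take n SGD steps; additive noise stands in for stochastic gradients.
        for _ in range(n):
            g = grad(x) + 0.01 * rng.standard_normal(x.shape)
            x = x - lr * g
        return x

    for _ in range(steps // (2 * probe_steps)):
        # Probe both candidate learning rates from the current iterate.
        x_up = run(x.copy(), lr * grow, probe_steps)
        x_dn = run(x.copy(), lr * shrink, probe_steps)
        # Keep the candidate with lower loss; a crude stand-in for the
        # statistical decision rule a real method would use.
        if loss(x_up) < loss(x_dn):
            x, lr = x_up, lr * grow
        else:
            x, lr = x_dn, lr * shrink
    return x, lr

# Example: minimize the quadratic f(x) = ||x||^2 / 2.
if __name__ == "__main__":
    f = lambda x: 0.5 * float(x @ x)
    df = lambda x: x
    x_opt, lr_final = autosgd_like(df, x0=np.ones(5), loss=f)
    print("final loss:", f(x_opt), "final lr:", lr_final)

On this toy quadratic the probe loop first grows the learning rate while larger steps keep reducing the loss, then shrinks it once the larger candidate overshoots, mimicking the increase/decrease behavior described in the abstract.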
Nikola Surjanovic, Alexandre Bouchard-Côté, Trevor Campbell
Computing technology; computer technology
Nikola Surjanovic, Alexandre Bouchard-Côté, Trevor Campbell. AutoSGD: Automatic Learning Rate Selection for Stochastic Gradient Descent [EB/OL]. (2025-05-27) [2025-06-12]. https://arxiv.org/abs/2505.21651.