国家预印本平台 (National Preprint Platform)

Nonlinear Stochastic Variance Reduced Gradient Algorithm Based on Neural Networks and Its Convergence

Abstract

Many optimization problems arising in practice are nonconvex. The stochastic variance reduced gradient (SVRG) algorithm offers one approach to such problems: it can solve nonconvex optimization problems by training neural networks. In this paper, a nonlinear stochastic variance reduced gradient method (NSVRG), in which the objective function is nonlinear, is proposed. Under mild conditions, the monotonicity of the error function is established. We then prove a weak convergence property under a constant learning rate, namely that the gradient of the error function tends to zero. Finally, a numerical experiment is given to substantiate the effectiveness of the theoretical results.
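The SVRG scheme that the abstract builds on can be illustrated with a minimal sketch. The loop below is the standard generic SVRG iteration on a toy least-squares problem; the data, learning rate, and epoch counts are illustrative assumptions, not the paper's NSVRG setup for neural networks:

```python
import numpy as np

def svrg(grad_i, w0, n, lr, epochs, inner):
    """Standard SVRG loop: each epoch takes a snapshot, computes the full
    gradient there, then runs variance-reduced stochastic steps."""
    rng = np.random.default_rng(0)
    w = np.array(w0, dtype=float)
    for _ in range(epochs):
        w_snap = w.copy()
        mu = sum(grad_i(w_snap, i) for i in range(n)) / n  # full gradient at snapshot
        for _ in range(inner):
            i = rng.integers(n)
            # control-variate correction keeps the step unbiased but low-variance
            w = w - lr * (grad_i(w, i) - grad_i(w_snap, i) + mu)
    return w

# Toy problem (illustrative only): least squares with noiseless targets,
# f(w) = (1/2n) * sum_i (x_i . w - y_i)^2.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i]

w = svrg(grad_i, np.zeros(3), n=50, lr=0.01, epochs=100, inner=100)
full_grad = X.T @ (X @ w - y) / 50
print(np.linalg.norm(full_grad))  # the gradient norm shrinks toward zero
```

The printed full-gradient norm illustrates the weak convergence property stated above: as training proceeds, the gradient of the error function approaches zero.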

高涛、张华清、王健、孙占全、杨国玲 (Gao Tao, Zhang Huaqing, Wang Jian, Sun Zhanquan, Yang Guoling)

Subject areas: computing and computer technology; fundamental theory of automation

Keywords: feedforward neural network; nonconvex; stochastic variance reduced gradient algorithm; monotonicity; convergence

高涛, 张华清, 王健, 孙占全, 杨国玲. 基于神经网络的非线性随机梯度学习算法及其收敛性 [EB/OL]. (2017-05-25) [2025-08-02]. http://www.paper.edu.cn/releasepaper/content/201705-1316.
