Ultra-fast feature learning for the training of two-layer neural networks in the two-timescale regime
We study the convergence of gradient methods for the training of mean-field single hidden layer neural networks with square loss. Observing that this is a separable non-linear least-squares problem which is linear w.r.t. the outer layer's weights, we consider a Variable Projection (VarPro) or two-timescale learning algorithm, thereby eliminating the linear variables and reducing the learning problem to the training of the feature distribution. Whereas most convergence rates for the training of neural networks rely on a neural tangent kernel analysis where features are fixed, we show that such a strategy enables provable convergence rates for the sampling of a teacher feature distribution. Precisely, in the limit where the regularization strength vanishes, we show that the dynamics of the feature distribution correspond to a weighted ultra-fast diffusion equation. Relying on recent results on the asymptotic behavior of such PDEs, we obtain guarantees for the convergence of the trained feature distribution towards the teacher feature distribution in a teacher-student setup.
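For context, ultra-fast diffusion equations are nonlinear diffusions of the generic form ∂_t u = Δ(u^m) with exponent m < 0; the equation derived in the paper is a weighted variant of this generic form (see the article for the precise statement).

As a rough sketch of the two-timescale/VarPro strategy described in the abstract (a minimal illustration, not the paper's mean-field algorithm: the tanh features, the ridge parameter lam, and the step size lr below are placeholder assumptions), each step solves the linear outer-layer problem in closed form and then takes a gradient step on the inner-layer feature weights:

    import numpy as np

    def two_timescale_step(W, X, y, lam=1e-3, lr=1e-2):
        # Features phi(x; w_j) = tanh(x . w_j); X is (n, d), W is (d, m).
        Phi = np.tanh(X @ W)
        m = W.shape[1]
        # The outer layer is linear in its weights: solve the ridge
        # regression problem exactly, eliminating the linear variables.
        a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
        # Residual of the reduced (projected) objective 0.5 * ||Phi a - y||^2.
        r = Phi @ a - y
        # By the envelope theorem, the gradient of the reduced objective
        # w.r.t. W equals the partial gradient with a held at its optimum.
        dPhi = (1.0 - Phi**2) * np.outer(r, a)  # chain rule through tanh
        W_new = W - lr * (X.T @ dPhi)
        return W_new, a

Iterating this step evolves only the feature weights W, the linear variables a being re-solved at every iteration; in the vanishing-regularization limit lam -> 0, the abstract identifies the resulting feature dynamics with a weighted ultra-fast diffusion PDE.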
Raphaël Barboni, Gabriel Peyré, François-Xavier Vialard
ENS-PSL; CNRS and ENS-PSL; LIGM
Computing technology; computer technology
Raphaël Barboni, Gabriel Peyré, François-Xavier Vialard. Ultra-fast feature learning for the training of two-layer neural networks in the two-timescale regime [EB/OL]. (2025-04-25) [2025-05-21]. https://arxiv.org/abs/2504.18208.