
Gradient Descent as a Shrinkage Operator for Spectral Bias


Source: arXiv
Abstract

We generalize the connection between activation function and spline regression/smoothing and characterize how this choice may influence spectral bias within a 1D shallow network. We then demonstrate how gradient descent (GD) can be reinterpreted as a shrinkage operator that masks the singular values of a neural network's Jacobian. Viewed this way, GD implicitly selects the number of frequency components to retain, thereby controlling the spectral bias. An explicit relationship is proposed between the choice of GD hyperparameters (learning rate & number of iterations) and bandwidth (the number of active components). GD regularization is shown to be effective only with monotonic activation functions. Finally, we highlight the utility of non-monotonic activation functions (sinc, Gaussian) as iteration-efficient surrogates for spectral bias.
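The abstract's central claim, that gradient descent masks singular values and thereby selects how many components stay active, can be illustrated on a toy problem. The sketch below is not the paper's code: it uses plain linear least squares (where the Jacobian is just the matrix `A`) and verifies the standard closed-form result that `t` GD steps from a zero initialization apply the per-component shrinkage mask `1 - (1 - eta * sigma_i^2)^t`. The matrix sizes, seed, and hyperparameters `eta` and `t` are illustrative choices.

```python
import numpy as np

# Sketch: GD on min_x ||A x - b||^2 acts as a shrinkage operator on the
# singular values of A. Starting from x_0 = 0, t steps with learning rate
# eta give x_t = V diag(s_i / sigma_i) U^T b, with shrinkage mask
#   s_i = 1 - (1 - eta * sigma_i^2)^t.
# Large singular values are "unmasked" first; eta and t jointly control
# how many components are effectively active (the bandwidth).

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
eta, t = 1e-3, 200  # GD hyperparameters set the effective bandwidth

# Plain gradient descent from a zero initialization.
x = np.zeros(10)
for _ in range(t):
    x -= eta * A.T @ (A @ x - b)

# Closed-form shrinkage prediction, evaluated in the SVD basis.
mask = 1.0 - (1.0 - eta * sigma**2) ** t       # per-component mask in [0, 1)
x_pred = Vt.T @ (mask / sigma * (U.T @ b))

assert np.allclose(x, x_pred)  # GD iterate matches the shrinkage form
```

With small `eta * t` only the largest singular values have a mask near 1, so few components are retained; increasing either hyperparameter opens the mask to smaller singular values, matching the bandwidth relationship described in the abstract.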

Simon Lucey

Subject: Computing technology, computer technology

Simon Lucey. Gradient Descent as a Shrinkage Operator for Spectral Bias [EB/OL]. (2025-04-25) [2025-05-05]. https://arxiv.org/abs/2504.18207.
