
Implicit Bias of Gradient Descent for Non-Homogeneous Deep Networks

Source: arXiv

Abstract

We establish the asymptotic implicit bias of gradient descent (GD) for generic non-homogeneous deep networks under exponential loss. Specifically, we characterize three key properties of GD iterates starting from a sufficiently small empirical risk, where the threshold is determined by a measure of the network's non-homogeneity. First, we show that a normalized margin induced by the GD iterates increases nearly monotonically. Second, we prove that while the norm of the GD iterates diverges to infinity, the iterates themselves converge in direction. Finally, we establish that this directional limit satisfies the Karush-Kuhn-Tucker (KKT) conditions of a margin maximization problem. Prior works on implicit bias have focused exclusively on homogeneous networks; in contrast, our results apply to a broad class of non-homogeneous networks satisfying a mild near-homogeneity condition. In particular, our results apply to networks with residual connections and non-homogeneous activation functions, thereby resolving an open problem posed by Ji and Telgarsky (2020).
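For context, the normalized margin and the margin-maximization problem referenced above can be illustrated by the standard formulation used for homogeneous networks in prior work (e.g., Lyu and Li, 2020); the paper's precise definitions for near-homogeneous networks may differ, and the symbols below (f, w, x_i, y_i, L) are illustrative rather than taken from the paper. For an L-homogeneous network f(w; x), i.e. f(cw; x) = c^L f(w; x) for all c > 0, with labels y_i in {+1, -1}, a typical sketch is:

\[
  \bar{\gamma}(w) \;=\; \frac{\min_{i}\, y_i\, f(w; x_i)}{\|w\|_2^{\,L}},
  \qquad
  \min_{w}\ \tfrac{1}{2}\|w\|_2^2
  \quad \text{s.t.}\quad y_i\, f(w; x_i) \ge 1 \ \ \text{for all } i.
\]

In this reading, the abstract's claim is that the directional limit \(\lim_{t\to\infty} w_t / \|w_t\|_2\) of the GD iterates satisfies the KKT conditions of (a suitably generalized version of) the constrained problem on the right, while the normalized margin \(\bar{\gamma}(w_t)\) increases nearly monotonically along the trajectory.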

Yuhang Cai, Kangjie Zhou, Jingfeng Wu, Song Mei, Michael Lindsey, Peter L. Bartlett

Subject: Computing Technology; Computer Technology

Yuhang Cai, Kangjie Zhou, Jingfeng Wu, Song Mei, Michael Lindsey, Peter L. Bartlett. Implicit Bias of Gradient Descent for Non-Homogeneous Deep Networks [EB/OL]. (2025-07-15) [2025-08-15]. https://arxiv.org/abs/2502.16075
