Convergence of Implicit Gradient Descent for Training Two-Layer Physics-Informed Neural Networks
Optimization algorithms are crucial in training physics-informed neural networks (PINNs), as unsuitable methods may lead to poor solutions. Compared with the common gradient descent (GD) algorithm, implicit gradient descent (IGD) performs better on certain multi-scale problems. In this paper, we provide a convergence analysis of IGD for training over-parameterized two-layer PINNs. We first derive the training dynamics of IGD for two-layer PINNs. Then, over-parameterization allows us to prove that randomly initialized IGD converges to a globally optimal solution at a linear rate. Moreover, because the training dynamics of IGD differ from those of GD, the learning rate can be selected independently of the sample size and the least eigenvalue of the Gram matrix. Additionally, the novel approach used in our convergence analysis imposes a milder requirement on the network width. Finally, empirical results validate our theoretical findings.
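For readers unfamiliar with the distinction, the two methods differ only in where the gradient is evaluated; the following is a minimal sketch of the standard GD and IGD update rules (the loss $L$, parameters $\theta_k$, and learning rate $\eta$ are generic symbols, not necessarily the paper's notation):

$$
\theta_{k+1} = \theta_k - \eta \nabla L(\theta_k) \quad \text{(GD, explicit)}, \qquad
\theta_{k+1} = \theta_k - \eta \nabla L(\theta_{k+1}) \quad \text{(IGD, implicit)}.
$$

The IGD step is the backward-Euler discretization of the gradient flow $\dot{\theta}(t) = -\nabla L(\theta(t))$, so each iteration requires solving an implicit equation in $\theta_{k+1}$; this difference in the discretized dynamics is what the abstract refers to when stating that the learning rate can be chosen independently of the sample size and the least eigenvalue of the Gram matrix.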
Xianliang Xu, Ting Du, Wang Kong, Bin Shan, Ye Li, Zhongyi Huang
Computing technology; computer technology
Xianliang Xu, Ting Du, Wang Kong, Bin Shan, Ye Li, Zhongyi Huang. Convergence of Implicit Gradient Descent for Training Two-Layer Physics-Informed Neural Networks [EB/OL]. (2025-08-01) [2025-08-11]. https://arxiv.org/abs/2407.02827.