On the Stability and Convergence of Physics Informed Neural Networks
Physics Informed Neural Networks (PINNs) are a numerical method that uses neural networks to approximate solutions of partial differential equations. They have received considerable attention and are currently used in numerous physical and engineering problems. The mathematical understanding of these methods is limited, and in particular, a consistent notion of stability seems to be missing. Towards addressing this issue we consider model problems of partial differential equations, namely linear elliptic and parabolic PDEs. Motivated by tools of nonlinear calculus of variations, we systematically show that coercivity of the energies and associated compactness provide a consistent framework for stability. For time-discrete training we show that if these properties fail to hold, then the methods may become unstable. Furthermore, using tools of $\Gamma$-convergence we provide new convergence results for weak solutions by only requiring that the neural network spaces are chosen to have suitable approximation properties. While our analysis is motivated by neural network-based approximation spaces, the framework developed here is applicable to any class of discrete functions satisfying the relevant approximation properties, and hence may serve as a foundation for the broader study of variational nonlinear PDE solvers.
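For orientation, a minimal illustrative sketch of the kind of variational formulation the abstract refers to (the notation $\mathcal{N}$, $\mathcal{E}$, $\lambda$ is ours and the paper's precise energies may differ): for the linear elliptic model problem $-\Delta u = f$ in $\Omega$ with $u = 0$ on $\partial\Omega$, the PINN approach minimizes a residual-based energy over a class of neural network functions $\mathcal{N}$,

% Illustrative PINN energy for a linear elliptic model problem
% (assumed notation; not necessarily the paper's exact formulation).
\[
  u_{\mathcal{N}} \in \operatorname*{arg\,min}_{v \in \mathcal{N}} \mathcal{E}(v),
  \qquad
  \mathcal{E}(v) = \int_{\Omega} \bigl|\Delta v + f\bigr|^{2}\,\mathrm{d}x
  \;+\; \lambda \int_{\partial\Omega} |v|^{2}\,\mathrm{d}s .
\]
% Coercivity of the energy over the admissible class and compactness of
% minimizing sequences are the properties the abstract identifies as the
% basis for stability; Gamma-convergence of the discrete energies then
% yields convergence of (approximate) minimizers to weak solutions.

In this reading, the stability question is whether minimizing sequences of the discrete energies remain bounded in a suitable norm, which is exactly what coercivity and compactness are meant to guarantee.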
Dimitrios Gazoulis, Ioannis Gkanis, Charalambos G. Makridakis
Mathematical Physics
Dimitrios Gazoulis, Ioannis Gkanis, Charalambos G. Makridakis. On the Stability and Convergence of Physics Informed Neural Networks [EB/OL]. (2025-07-09) [2025-07-25]. https://arxiv.org/abs/2308.05423.