Computational Math with Neural Networks is Hard
We show that, under some widely believed assumptions, there are no higher-order algorithms for basic tasks in computational mathematics, such as computing integrals with neural network integrands, computing solutions of a Poisson equation with a neural network source term, and computing the matrix-vector product with a neural-network-encoded matrix. We show that this is already true for very simple feed-forward networks with at least three hidden layers, bounded weights, bounded realization, and sparse connectivity, even if the algorithms are allowed to access the weights of the network. The fundamental idea behind these results is that it is already very hard to check whether a given neural network represents the zero function. The non-locality of the problems above allows us to reduce the approximation setting to deciding whether the input is zero or not. We demonstrate the sharpness of our results by providing fast quadrature algorithms for one-layer networks and by giving numerical evidence that quasi-Monte Carlo methods achieve the best possible order of convergence for quadrature with neural networks.
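The reduction at the heart of these results can be made concrete: a ReLU network can realize a narrow "bump" that is zero outside a tiny interval, so any quadrature rule whose nodes miss that interval cannot distinguish the network from the zero function. The following is a minimal sketch of this effect, assuming NumPy; the one-hidden-layer hat construction and the constants c, h, and n are illustrative choices, not the paper's construction (which uses deeper networks with bounded weights).

```python
# Minimal illustration (not code from the paper) of the hidden-bump idea:
# a tiny ReLU network whose realization is a hat of width 2h around c,
# hence zero almost everywhere on [0,1]. Quadrature rules whose nodes
# miss [c-h, c+h] return 0, although the true integral is h > 0.
import numpy as np

c, h = 0.3141, 1e-6  # bump location and half-width (illustrative)

def relu(t):
    return np.maximum(t, 0.0)

def bump_network(x):
    """One-hidden-layer ReLU network realizing the hat of height 1 on
    [c-h, c+h]: rises linearly on [c-h, c], falls linearly on [c, c+h]."""
    return (relu(x - (c - h)) - 2.0 * relu(x - c) + relu(x - (c + h))) / h

exact = h  # integral of the hat over [0,1]: triangle area (1/2)*(2h)*1

rng = np.random.default_rng(0)
n = 2**14  # number of quadrature nodes, far fewer than 1/h

# Both Monte Carlo and a deterministic midpoint rule miss the bump.
mc = bump_network(rng.uniform(0.0, 1.0, n)).mean()
midpoint = bump_network((np.arange(n) + 0.5) / n).mean()

print(f"exact integral: {exact:.2e}")
print(f"Monte Carlo:    {mc:.2e}")        # with high probability 0.0
print(f"midpoint rule:  {midpoint:.2e}")  # 0.0 unless a node hits the bump
```

For n much smaller than 1/h, both estimates are (with high probability) exactly 0 while the true integral is h; this gap between the network and the zero function is the kind of ambiguity the non-locality argument exploits.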
Michael Feischl, Fabian Zehetgruber
Mathematics
Michael Feischl, Fabian Zehetgruber. Computational Math with Neural Networks is Hard [EB/OL]. (2025-05-23) [2025-06-05]. https://arxiv.org/abs/2505.17751