Improved universal approximation with neural networks studied via affine-invariant subspaces of $L_2(\mathbb{R}^n)$
We show that there are no non-trivial closed subspaces of $L_2(\mathbb{R}^n)$ that are invariant under invertible affine transformations. We apply this result to neural networks, showing that any nonzero $L_2(\mathbb{R})$ function serves as an activation function in a one-hidden-layer neural network that approximates every function in $L_2(\mathbb{R})$ to any desired accuracy. This generalizes the universal approximation properties of neural networks in $L_2(\mathbb{R})$ related to Wiener's Tauberian Theorems. Our results extend to the spaces $L_p(\mathbb{R})$ with $p>1$.
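The claim can be illustrated numerically: pick any nonzero $L_2(\mathbb{R})$ activation (here a Gaussian, which is not covered by classical non-polynomial sigmoidal conditions), form a one-hidden-layer network $\sum_j c_j\,\sigma(w_j x + b_j)$, and fit it to an $L_2(\mathbb{R})$ target. This is a minimal sketch, not code from the paper; the target function, the random sampling of inner weights, and the least-squares fit of the outer coefficients are all illustrative choices.

```python
# Minimal sketch (assumptions, not the paper's method): approximate an
# L2(R) target by a one-hidden-layer network with Gaussian activation.
import numpy as np

rng = np.random.default_rng(0)

def activation(x):
    return np.exp(-x**2)            # a nonzero element of L2(R)

def target(x):
    return x * np.exp(-x**2 / 2)    # a smooth L2(R) target (illustrative)

# Grid on which the (truncated) L2 error is measured.
x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]

width = 200
w = rng.uniform(0.5, 4.0, size=width)    # inner weights (dilations)
b = rng.uniform(-6.0, 6.0, size=width)   # inner biases (translations)

# Design matrix: column j is activation(w_j * x + b_j).
phi = activation(np.outer(x, w) + b)

# Fit outer coefficients c by least squares.
coef, *_ = np.linalg.lstsq(phi, target(x), rcond=None)

err = np.sqrt(np.sum((phi @ coef - target(x))**2) * dx)
print(f"approximate L2 error: {err:.6f}")
```

Increasing the hidden-layer width drives the error down, consistent with the density statement: the affine dilations and translations of the single activation span a dense subspace of $L_2(\mathbb{R})$.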
Cornelia Schneider, Samuel Probst
Mathematics
Cornelia Schneider, Samuel Probst. Improved universal approximation with neural networks studied via affine-invariant subspaces of $L_2(\mathbb{R}^n)$ [EB/OL]. (2025-04-03) [2025-05-24]. https://arxiv.org/abs/2504.02445.