
Uncertainty Quantification From Scaling Laws in Deep Neural Networks

Source: arXiv
Abstract

Quantifying the uncertainty from machine learning analyses is critical to their use in the physical sciences. In this work we focus on uncertainty inherited from the initialization distribution of neural networks. We compute the mean $\mu_{\mathcal{L}}$ and variance $\sigma_{\mathcal{L}}^2$ of the test loss $\mathcal{L}$ for an ensemble of multi-layer perceptrons (MLPs) with neural tangent kernel (NTK) initialization in the infinite-width limit, and compare empirically to the results from finite-width networks for three example tasks: MNIST classification, CIFAR classification and calorimeter energy regression. We observe scaling laws as a function of training set size $N_\mathcal{D}$ for both $\mu_{\mathcal{L}}$ and $\sigma_{\mathcal{L}}$, but find that the coefficient of variation $\epsilon_{\mathcal{L}} \equiv \sigma_{\mathcal{L}}/\mu_{\mathcal{L}}$ becomes independent of $N_\mathcal{D}$ at both infinite and finite width for sufficiently large $N_\mathcal{D}$. This implies that the coefficient of variation of a finite-width network may be approximated by its infinite-width value, and may in principle be calculable using finite-width perturbation theory.
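The quantity studied in the abstract, the coefficient of variation $\epsilon_{\mathcal{L}} = \sigma_{\mathcal{L}}/\mu_{\mathcal{L}}$, can be estimated empirically by training an ensemble of networks that differ only in their random initialization and comparing their test losses. Below is a minimal sketch of that procedure; it is not the authors' code, it substitutes a toy regression task for the paper's MNIST/CIFAR/calorimeter benchmarks, and it uses standard PyTorch initialization rather than NTK parameterization.

```python
# Minimal sketch: estimate mu_L, sigma_L, and eps_L = sigma_L / mu_L of the
# test loss over an ensemble of MLPs differing only in initialization seed.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Fixed synthetic dataset shared by every ensemble member (stand-in for the
# paper's benchmarks).
N_train, N_test, d_in = 512, 256, 16
X_train = torch.randn(N_train, d_in)
X_test = torch.randn(N_test, d_in)
w_true = torch.randn(d_in, 1)
y_train = X_train @ w_true + 0.1 * torch.randn(N_train, 1)
y_test = X_test @ w_true + 0.1 * torch.randn(N_test, 1)

def train_one(seed, width=128, epochs=200, lr=1e-2):
    """Train one MLP from a fresh initialization and return its test loss."""
    torch.manual_seed(seed)
    model = nn.Sequential(nn.Linear(d_in, width), nn.ReLU(), nn.Linear(width, 1))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X_train), y_train).backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(model(X_test), y_test).item()

# Ensemble over initialization seeds only; data and hyperparameters are fixed.
losses = torch.tensor([train_one(seed) for seed in range(20)])
mu_L = losses.mean()
sigma_L = losses.std(unbiased=True)
eps_L = sigma_L / mu_L
print(f"mu_L = {mu_L:.4f}, sigma_L = {sigma_L:.4f}, eps_L = {eps_L:.3f}")
```

Repeating this estimate at several training set sizes $N_\mathcal{D}$ is the kind of sweep needed to observe the scaling laws and the plateau of $\epsilon_{\mathcal{L}}$ described in the abstract.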

Ibrahim Elsharkawy, Yonatan Kahn, Benjamin Hooberman

Subject: Computing technology, computer technology

Ibrahim Elsharkawy, Yonatan Kahn, Benjamin Hooberman. Uncertainty Quantification From Scaling Laws in Deep Neural Networks [EB/OL]. (2025-03-07) [2025-04-29]. https://arxiv.org/abs/2503.05938.
