Entropic bounds for conditionally Gaussian vectors and applications to neural networks
Using entropic inequalities from information theory, we provide new bounds on the total variation and 2-Wasserstein distances between a conditionally Gaussian law and a Gaussian law with invertible covariance matrix. We apply our results to quantify the speed of convergence to the Gaussian limit of a randomly initialized fully connected neural network and its derivatives, evaluated at a finite number of inputs, when the initialization is Gaussian and the sizes of the inner layers diverge to infinity. Our results require mild assumptions on the activation function, and allow one to recover optimal rates of convergence in a variety of distances, thus improving and extending the findings of Basteri and Trevisan (2023), Favaro et al. (2023), Trevisan (2024) and Apollonio et al. (2024). One of our main tools is the set of quantitative cumulant estimates established in Hanin (2024). As an illustration, we apply our results to bound the total variation distance between the Bayesian posterior law of the neural network and its derivatives, and the posterior law of the corresponding Gaussian limit: this yields quantitative versions of a posterior CLT by Hron et al. (2022), and extends several estimates by Trevisan (2024) to the total variation metric.
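To illustrate the objects at play, the sketch below samples a randomly initialized fully connected network at a finite set of inputs. The architecture, layer widths, the tanh activation, the 1/fan-in variance scaling, and all names are illustrative choices, not taken from the paper. Conditionally on the last hidden layer, the output vector is exactly centered Gaussian, which is the "conditionally Gaussian law" that the entropic bounds compare with its Gaussian limit as the inner widths grow.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_network_output(x, widths, c_w=1.0, activation=np.tanh):
    """Evaluate one randomly initialized fully connected network at the
    rows of x (shape: n_inputs x d_in). Returns the scalar outputs and
    the last hidden layer, conditioning on which makes the output Gaussian."""
    h = x
    for n_out in widths:
        n_in = h.shape[1]
        W = rng.normal(0.0, np.sqrt(c_w / n_in), size=(n_in, n_out))
        h = activation(h @ W)
    n_in = h.shape[1]
    w_out = rng.normal(0.0, np.sqrt(c_w / n_in), size=(n_in, 1))
    return (h @ w_out).ravel(), h

# A finite set of inputs, as in the CLT-type statements summarized above.
x = np.array([[1.0, 0.5], [-0.3, 0.8], [0.2, -1.0]])

# Conditionally on the last hidden layer h, the output vector is centered
# Gaussian with covariance (c_w / width) * h @ h.T: the conditionally
# Gaussian law whose distance to a Gaussian law is being bounded.
out, h = sample_network_output(x, widths=(256, 256))
cond_cov = (1.0 / h.shape[1]) * h @ h.T
print("conditional covariance given the hidden layer:\n", np.round(cond_cov, 3))

# As the inner widths grow, the unconditional law stabilizes, consistent
# with convergence to a Gaussian limit at rates the paper quantifies in
# total variation and 2-Wasserstein distance.
for width in (16, 128, 1024):
    draws = np.stack([sample_network_output(x, widths=(width, width))[0]
                      for _ in range(1000)])
    print(f"width {width:5d}: empirical output covariance\n",
          np.round(np.cov(draws, rowvar=False), 3))
```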
Lucia Celli, Giovanni Peccati
Mathematical and computational techniques, computer technology
Lucia Celli, Giovanni Peccati. Entropic bounds for conditionally Gaussian vectors and applications to neural networks [EB/OL]. (2025-04-11) [2025-05-18]. https://arxiv.org/abs/2504.08335.