Uncertainty quantification of neural network models of evolving processes via Langevin sampling
We propose a scalable, approximate inference hypernetwork framework for a general model of history-dependent processes. The flexible data model is based on a neural ordinary differential equation (NODE) representing the evolution of internal states, together with a trainable observation model subcomponent. The posterior distribution over the data model parameters (weights and biases) follows a stochastic differential equation with a drift term related to the score of the posterior, which is learned jointly with the data model parameters. This Langevin sampling approach offers flexibility in balancing the computational budget between the evaluation cost of the data model and the approximation of the posterior density of its parameters. We demonstrate the performance of the ensemble sampling hypernetwork on chemical reaction and material physics data and compare it to standard variational inference.
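The abstract couples a NODE data model with Langevin sampling of its parameter posterior, i.e., simulating a stochastic differential equation whose drift is the score of the posterior. As a minimal, hypothetical sketch of that sampling idea only, the snippet below runs unadjusted Langevin dynamics (an Euler-Maruyama discretization of the SDE) on a toy Gaussian linear model standing in for the network weights and biases; the function names, the toy model, and all hyperparameters are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def grad_log_posterior(theta, X, y):
    """Score of the posterior for a unit-variance Gaussian likelihood
    y ~ N(X theta, I) with a standard normal prior on theta.
    (Toy stand-in for the score of the NODE parameter posterior.)"""
    residual = y - X @ theta
    return X.T @ residual - theta  # likelihood gradient + prior gradient

def langevin_sample(theta0, X, y, step=1e-3, n_steps=5000, seed=None):
    """Unadjusted Langevin dynamics: Euler-Maruyama discretization of
    d theta = (1/2) score(theta) dt + dW."""
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(theta.shape)
        theta = (theta
                 + 0.5 * step * grad_log_posterior(theta, X, y)
                 + np.sqrt(step) * noise)
        samples.append(theta.copy())
    return np.array(samples)

# Usage: synthetic data, then posterior samples over the parameters.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.1 * rng.standard_normal(100)
samples = langevin_sample(np.zeros(3), X, y, seed=1)
print("posterior mean ~", samples[2000:].mean(axis=0))  # discard burn-in
```

In the paper's setting the score drift is itself learned jointly with the data model rather than computed in closed form as above; the ensemble of trajectories then serves as an approximate posterior sample over the NODE parameters.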
Cosmin Safta, Reese E. Jones, Ravi G. Patel, Raelynn Wonnacot, Dan S. Bolintineanu, Craig M. Hamel, Sharlotte L. B. Kramer
Subjects: Physics, Chemistry, Computing Technology, Computer Technology
Cosmin Safta, Reese E. Jones, Ravi G. Patel, Raelynn Wonnacot, Dan S. Bolintineanu, Craig M. Hamel, Sharlotte L. B. Kramer. Uncertainty quantification of neural network models of evolving processes via Langevin sampling [EB/OL]. (2025-04-21) [2025-07-01]. https://arxiv.org/abs/2504.14854.