Stochastic gradient descent based variational inference for infinite-dimensional inverse problems

Source: arXiv
Abstract

This paper introduces two variational inference approaches for infinite-dimensional inverse problems, developed through stochastic gradient descent with a constant learning rate. The proposed methods enable efficient approximate sampling from the target posterior distribution using a constant-rate stochastic gradient descent (cSGD) iteration. Specifically, we introduce a randomization strategy that incorporates stochastic gradient noise, allowing the cSGD iteration to be viewed as a discrete-time process. This perspective establishes key relationships between the covariance operators of the approximate and true posterior distributions, thereby validating cSGD as a variational inference method. We also investigate the regularization properties of the cSGD iteration and provide a theoretical analysis of the discretization error between the approximated posterior mean and the true background function. Building on this framework, we develop a preconditioned version of cSGD to further improve sampling efficiency. Finally, we apply the proposed methods to two practical inverse problems: one governed by a simple smooth equation and the other by the steady-state Darcy flow equation. Numerical results confirm our theoretical findings and compare the sampling performance of the two approaches for solving linear and non-linear inverse problems.
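
The core idea of the first method, treating the iterates of a constant-rate stochastic gradient descent (cSGD) recursion as approximate posterior samples, can be illustrated on a finite-dimensional linear-Gaussian toy problem. The sketch below is not the authors' algorithm or code: the forward operator, noise level, learning rate, mini-batch scheme, and all dimensions are illustrative assumptions, and the exact Gaussian posterior is computed only as a reference for checking the cSGD approximation.

```python
# Minimal sketch (assumptions, not the paper's implementation): constant-rate
# SGD iterates used as approximate posterior samples for a toy linear-Gaussian
# inverse problem y = A u + noise.
import numpy as np

rng = np.random.default_rng(0)

d, m = 20, 200                                  # parameter / data dimensions (assumed)
A = rng.standard_normal((m, d)) / np.sqrt(m)    # toy linear forward operator
u_true = rng.standard_normal(d)
sigma = 0.1                                     # observation noise std (assumed)
y = A @ u_true + sigma * rng.standard_normal(m)

# Gaussian prior N(0, tau^2 I): the posterior is then Gaussian, so its mean and
# covariance are available in closed form and serve as the reference.
tau = 1.0
H = A.T @ A / sigma**2 + np.eye(d) / tau**2     # posterior precision
post_cov = np.linalg.inv(H)
post_mean = post_cov @ (A.T @ y / sigma**2)

# Constant-rate SGD on the negative log posterior; the stochastic gradient
# noise comes from random mini-batches of the data. After burn-in, the
# iterates are kept as approximate posterior samples.
eta, batch = 1e-3, 20                           # constant learning rate, batch size (assumed)
n_iter, burn = 50_000, 10_000

u = np.zeros(d)
samples = []
for k in range(n_iter):
    idx = rng.choice(m, size=batch, replace=False)
    # Unbiased mini-batch estimate of the gradient of the negative log posterior.
    grad = (m / batch) * A[idx].T @ (A[idx] @ u - y[idx]) / sigma**2 + u / tau**2
    u = u - eta * grad
    if k >= burn:
        samples.append(u.copy())

S = np.array(samples)
# Diagnostic comparison only: how closely the stationary covariance of the
# iterates relates to the true posterior covariance depends on the learning
# rate and the gradient-noise structure, which is the kind of relationship
# the paper analyses.
print("mean error:", np.linalg.norm(S.mean(axis=0) - post_mean))
print("cov error :", np.linalg.norm(np.cov(S.T) - post_cov))
```

A preconditioned variant, in the spirit of the paper's second method, would replace the update u = u - eta * grad with u = u - eta * (P @ grad) for a symmetric positive-definite matrix P chosen to rescale the problem (for instance, an approximation of the inverse posterior precision); the specific choice of P here would again be an assumption rather than the paper's prescription.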

Jiaming Sui, Junxiong Jia, Jinglai Li

Mathematics

Jiaming Sui, Junxiong Jia, Jinglai Li. Stochastic gradient descent based variational inference for infinite-dimensional inverse problems [EB/OL]. (2025-06-09) [2025-07-16]. https://arxiv.org/abs/2506.08380.
