Thompson Sampling in Function Spaces via Neural Operators
We propose an extension of Thompson sampling to optimization problems over function spaces where the objective is a known functional of an unknown operator's output. We assume that functional evaluations are inexpensive, while queries to the operator (such as running a high-fidelity simulator) are costly. Our algorithm employs a sample-then-optimize approach using neural operator surrogates. This strategy avoids explicit uncertainty quantification by treating trained neural operators as approximate samples from a Gaussian process. We provide novel theoretical convergence guarantees, based on Gaussian processes in the infinite-dimensional setting, under minimal assumptions. We benchmark our method against existing baselines on functional optimization tasks involving partial differential equations and other nonlinear operator-driven phenomena, demonstrating improved sample efficiency and competitive performance.
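The abstract describes a sample-then-optimize Thompson sampling loop: at each round, a freshly initialized neural operator surrogate is trained on the queried data (acting as an approximate posterior sample), the known functional is cheaply optimized over the surrogate's outputs, and the true costly operator is queried at the maximizer. The following is a minimal illustrative sketch of that loop, not the paper's implementation: it assumes a 1D function space discretized on a fixed grid, uses an MLP as a stand-in for a neural operator, and the names `true_operator` and `functional_J` are hypothetical placeholders.

```python
# Minimal sketch of sample-then-optimize Thompson sampling over a discretized
# function space. Illustrative only; a real neural operator (e.g. an FNO)
# would replace the MLP surrogate, and `true_operator` / `functional_J` are
# hypothetical stand-ins for a costly simulator and a known cheap functional.
import torch
import torch.nn as nn

torch.manual_seed(0)
GRID = 32  # number of discretization points per function


def true_operator(u):
    """Hypothetical costly operator (e.g. a PDE solver): u(x) -> G(u)(x)."""
    return torch.sin(3.0 * u) + 0.5 * torch.roll(u, 1, dims=-1)


def functional_J(v):
    """Known, inexpensive functional of the operator output (here: negative
    squared distance to a target profile, to be maximized)."""
    target = torch.linspace(0.0, 1.0, GRID)
    return -((v - target) ** 2).mean(dim=-1)


def make_surrogate():
    # A fresh random initialization each round: under the sample-then-optimize
    # view, each independently trained network acts as an approximate sample
    # from the posterior over operators, so no explicit UQ is needed.
    return nn.Sequential(
        nn.Linear(GRID, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, GRID),
    )


def fit(model, X, Y, steps=500):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(X) - Y) ** 2).mean()
        loss.backward()
        opt.step()
    return model


# Initial design: a handful of costly operator queries.
X = torch.randn(5, GRID)
Y = true_operator(X)

for t in range(10):
    surrogate = fit(make_surrogate(), X, Y)   # approximate posterior sample
    candidates = torch.randn(256, GRID)       # candidate input functions
    with torch.no_grad():
        scores = functional_J(surrogate(candidates))  # cheap evaluations
    u_next = candidates[scores.argmax()].unsqueeze(0)
    X = torch.cat([X, u_next])                # one costly operator query
    Y = torch.cat([Y, true_operator(u_next)])

best = functional_J(Y).argmax()
print(f"best J after {len(X)} queries: {functional_J(Y)[best].item():.4f}")
```

In this sketch the candidate set is random; in practice the inner maximization of the functional over the surrogate could use gradient-based optimization in the input function space, since the surrogate is differentiable and cheap to evaluate.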
Rafael Oliveira, Xuesong Wang, Kian Ming A. Chai, Edwin V. Bonilla
Subject: Computing Technology, Computer Technology
Rafael Oliveira, Xuesong Wang, Kian Ming A. Chai, Edwin V. Bonilla. Thompson Sampling in Function Spaces via Neural Operators [EB/OL]. (2025-06-27) [2025-07-16]. https://arxiv.org/abs/2506.21894