Discretization-independent multifidelity operator learning for partial differential equations
We develop a new and general encode-approximate-reconstruct operator learning model that leverages learned neural representations of bases for input and output function distributions. We introduce the concepts of \textit{numerical operator learning} and \textit{discretization independence}, which clarify the relationship between theoretical formulations and practical realizations of operator learning models. Our model is discretization-independent, making it particularly effective for multifidelity learning. We establish theoretical approximation guarantees, demonstrating uniform universal approximation under strong assumptions on the input functions and statistical approximation under weaker conditions. To our knowledge, this is the first comprehensive study that investigates how discretization independence enables robust and efficient multifidelity operator learning. We validate our method through extensive numerical experiments involving both local and nonlocal PDEs, including time-independent and time-dependent problems. The results show that multifidelity training significantly improves accuracy and computational efficiency. Moreover, multifidelity training further enhances empirical discretization independence.
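To make the encode-approximate-reconstruct structure concrete, here is a minimal, hypothetical sketch in PyTorch of such a model with learned neural bases. It assumes Monte Carlo quadrature against a learned input basis for the encoding step; all names (`NeuralBasis`, `EncodeApproxReconstruct`, `k_in`, `k_out`) and architectural details are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NeuralBasis(nn.Module):
    """MLP mapping a coordinate in R^dim to the values of n_basis
    learned basis functions at that coordinate."""
    def __init__(self, dim, n_basis, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_basis),
        )

    def forward(self, x):
        # x: (n_points, dim) -> (n_points, n_basis)
        return self.net(x)

class EncodeApproxReconstruct(nn.Module):
    """Hypothetical encode-approximate-reconstruct surrogate.

    Encode:      c_k ~ mean_i u(x_i) * phi_k(x_i)   (Monte Carlo quadrature)
    Approximate: d = F(c) for a small MLP F on coefficients
    Reconstruct: v(y) = sum_j d_j * psi_j(y)

    Neither step is tied to a fixed grid: encoding averages pointwise
    products over whatever sample points are given, and reconstruction
    evaluates the learned output basis at arbitrary query points.
    """
    def __init__(self, dim_in, dim_out, k_in=32, k_out=32):
        super().__init__()
        self.phi = NeuralBasis(dim_in, k_in)    # learned input basis
        self.psi = NeuralBasis(dim_out, k_out)  # learned output basis
        self.approx = nn.Sequential(            # coefficient-to-coefficient map
            nn.Linear(k_in, 128), nn.Tanh(),
            nn.Linear(128, k_out),
        )

    def forward(self, x, u, y):
        # x: (N, dim_in) input sample points; u: (N,) input function values
        # y: (M, dim_out) output query points
        c = (self.phi(x) * u[:, None]).mean(dim=0)  # (k_in,) encoded coefficients
        d = self.approx(c)                          # (k_out,) mapped coefficients
        return self.psi(y) @ d                      # (M,) output function values
```

Under these assumptions, the same trained weights accept input functions sampled on any point set and produce outputs at any query points, which is what makes mixed-resolution (multifidelity) training batches possible in the first place.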
Jacob Hauck, Yanzhi Zhang
Mathematics
Jacob Hauck, Yanzhi Zhang. Discretization-independent multifidelity operator learning for partial differential equations [EB/OL]. (2025-07-09) [2025-07-21]. https://arxiv.org/abs/2507.07292.