Locally Subspace-Informed Neural Operators for Efficient Multiscale PDE Solving
Neural operators (NOs) struggle with high-contrast multiscale partial differential equations (PDEs), where fine-scale heterogeneities cause large errors. To address this, we use the Generalized Multiscale Finite Element Method (GMsFEM), which constructs localized spectral basis functions on coarse grids; this approach captures dominant multiscale features and solves heterogeneous PDEs accurately at reduced computational cost. However, computing these basis functions is itself expensive. This gap motivates our core idea: use a NO to learn the subspace itself, rather than individual basis functions, by employing a subspace-informed loss. On standard multiscale benchmarks, namely a linear elliptic diffusion problem and the nonlinear, steady-state Richards equation, our hybrid method cuts solution error by approximately $60\%$ compared with standalone NOs and reduces basis-construction time by about $60\times$ relative to classical GMsFEM, while remaining independent of forcing terms and boundary conditions. The result combines multiscale finite-element robustness with NO speed, yielding a practical solver for heterogeneous PDEs.
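The abstract does not give the exact form of the subspace-informed loss. As a hypothetical illustration of the idea of penalizing the learned *subspace* rather than individual basis vectors, one basis-invariant choice is the Frobenius distance between orthogonal projectors onto the predicted and reference spans (all names below are assumptions for the sketch, not the paper's implementation):

```python
import numpy as np

def subspace_loss(B_pred, B_ref):
    """Frobenius distance between orthogonal projectors onto the
    column spans of B_pred and B_ref.

    This depends only on the subspaces spanned, not on the particular
    basis vectors, so two different bases of the same space give zero
    loss.  Hypothetical sketch: the paper's actual loss may differ.
    """
    Qp, _ = np.linalg.qr(B_pred)   # orthonormal basis of span(B_pred)
    Qr, _ = np.linalg.qr(B_ref)    # orthonormal basis of span(B_ref)
    Pp = Qp @ Qp.T                 # projector onto predicted subspace
    Pr = Qr @ Qr.T                 # projector onto reference subspace
    return np.linalg.norm(Pp - Pr, "fro") ** 2

# Two different bases of the same plane in R^3 give zero loss:
B1 = np.array([[1., 0.], [0., 1.], [0., 0.]])
B2 = np.array([[1., 1.], [1., -1.], [0., 0.]])  # same span, mixed/scaled
```

A loss of this kind lets the NO output any convenient basis of the target GMsFEM space, rather than forcing it to reproduce each spectral basis function individually.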
Alexander Rudikov, Vladimir Fanaskov, Sergei Stepanov, Buzheng Shan, Ekaterina Muravleva, Yalchin Efendiev, Ivan Oseledets
Mathematics; Basic Engineering Sciences
Alexander Rudikov, Vladimir Fanaskov, Sergei Stepanov, Buzheng Shan, Ekaterina Muravleva, Yalchin Efendiev, Ivan Oseledets. Locally Subspace-Informed Neural Operators for Efficient Multiscale PDE Solving [EB/OL]. (2025-05-21) [2025-06-16]. https://arxiv.org/abs/2505.16030.