National Preprint Platform

DPG loss functions for learning parameter-to-solution maps by neural networks


Source: arXiv
Abstract

We develop, analyze, and experimentally explore residual-based loss functions for machine learning of parameter-to-solution maps in the context of parameter-dependent families of partial differential equations (PDEs). Our primary concern is rigorous accuracy certification that enhances the prediction capability of the resulting deep neural network reduced models. This is achieved by the use of variationally correct loss functions. Through one specific example of an elliptic PDE, we work out the details of establishing the variational correctness of a loss function derived from an ultraweak Discontinuous Petrov-Galerkin (DPG) discretization. Despite the focus on this example, the proposed concepts apply to a much wider scope of problems, namely those for which stable DPG formulations are available. We discuss the issue of high-contrast diffusion fields and the ensuing difficulties with degrading ellipticity. Both numerical results and theoretical arguments illustrate that for high-contrast diffusion parameters the proposed DPG loss functions deliver much more robust performance than simpler least-squares losses.
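For orientation, the "simpler least-squares losses" that the abstract contrasts with the proposed DPG losses can be sketched as a discrete residual norm of a candidate solution. The following minimal example is not taken from the paper: the 1D finite-difference model problem -(a(x) u')' = f, the constant right-hand side, and the names `assemble` and `residual_loss` are all illustrative assumptions, standing in for the discretized PDE operator that a network's predicted solution would be tested against.

```python
import numpy as np

def assemble(a, n=64):
    # Finite-difference discretization of -(a(x) u')' = f on (0,1)
    # with u(0) = u(1) = 0, unknowns at interior nodes x_i = i*h.
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    am = a(0.5 * (x[:-1] + x[1:]))           # diffusion at cell midpoints
    main = (am[:-1] + am[1:]) / h**2         # tridiagonal stencil: diagonal
    off = -am[1:-1] / h**2                   # off-diagonals
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    f = np.ones(n - 1)                       # illustrative right-hand side f = 1
    return A, f

def residual_loss(u, a, n=64):
    # Plain least-squares loss ||A(a) u - f||^2 for a candidate solution u,
    # e.g. the output of a neural surrogate evaluated at parameter a.
    A, f = assemble(a, n)
    r = A @ u - f
    return float(r @ r)
```

The exact discrete solution drives this loss to (near) zero, while any perturbation raises it; the paper's point is that for high-contrast `a` the conditioning of such a plain residual degrades, which the DPG-based loss avoids.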

Pablo Cortés Castillo, Wolfgang Dahmen, Jay Gopalakrishnan

Mathematics

Pablo Cortés Castillo, Wolfgang Dahmen, Jay Gopalakrishnan. DPG loss functions for learning parameter-to-solution maps by neural networks [EB/OL]. (2025-06-23) [2025-07-21]. https://arxiv.org/abs/2506.18773.
