
HiPreNets: High-Precision Neural Networks through Progressive Training

Source: arXiv
Abstract

Deep neural networks are powerful tools for solving nonlinear problems in science and engineering, but training highly accurate models becomes challenging as problem complexity increases. Non-convex optimization and numerous hyperparameters to tune make performance improvement difficult, and traditional approaches often prioritize minimizing mean squared error (MSE) while overlooking $L^{\infty}$ error, which is the critical focus in many applications. To address these challenges, we present a progressive framework for training and tuning high-precision neural networks (HiPreNets). Our approach refines a previously explored staged training technique for neural networks that improves an existing fully connected neural network by sequentially learning its prediction residuals using additional networks, leading to improved overall accuracy. We discuss how to take advantage of the structure of the residuals to guide the choice of loss function, number of parameters to use, and ways to introduce adaptive data sampling techniques. We validate our framework's effectiveness through several benchmark problems.
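The progressive residual idea described in the abstract can be made concrete with a short sketch. The following PyTorch snippet is a minimal illustration, not the authors' implementation: the helper names (`make_mlp`, `train`), the network width, the three-stage depth, and the toy 1-D target are all hypothetical choices, and the paper's residual-informed loss selection and adaptive data sampling are not reproduced here. Each stage trains a fresh network on the residual left by the running sum of the earlier stages.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim=1, width=64, out_dim=1):
    # Small fully connected network; the architecture is an illustrative choice.
    return nn.Sequential(
        nn.Linear(in_dim, width), nn.Tanh(),
        nn.Linear(width, width), nn.Tanh(),
        nn.Linear(width, out_dim),
    )

def train(net, x, target, epochs=2000, lr=1e-3):
    # Plain MSE training loop; the paper discusses choosing the loss based on
    # the structure of the residuals, which this sketch does not reproduce.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x), target)
        loss.backward()
        opt.step()
    return net

# Toy 1-D regression target (a hypothetical stand-in for a benchmark problem).
x = torch.linspace(-1.0, 1.0, 512).unsqueeze(1)
y = torch.sin(8.0 * x) + 0.5 * torch.cos(3.0 * x)

stages = []
residual = y
for _ in range(3):  # the number of residual stages is a tunable choice
    net = train(make_mlp(), x, residual)
    stages.append(net)
    with torch.no_grad():
        # The next stage fits what the ensemble so far still gets wrong.
        residual = y - sum(s(x) for s in stages)

with torch.no_grad():
    pred = sum(s(x) for s in stages)
    print(f"max |error| after staging: {(y - pred).abs().max().item():.2e}")
```

Because each stage fits only the error the accumulated prediction still makes, the maximum absolute error (the $L^{\infty}$ error the abstract emphasizes) can shrink stage by stage, even though each individual network is trained with an ordinary loss.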

Ethan Mulle, Wei Kang, Qi Gong

Computing Technology, Computer Technology

Ethan Mulle, Wei Kang, Qi Gong. HiPreNets: High-Precision Neural Networks through Progressive Training [EB/OL]. (2025-06-17) [2025-07-18]. https://arxiv.org/abs/2506.15064.
