
Efficient GPU-Accelerated Training of a Neuroevolution Potential with Analytical Gradients


Source: arXiv
Abstract

Machine-learning interatomic potentials (MLIPs) such as neuroevolution potentials (NEP) combine quantum-mechanical accuracy with computational efficiency, significantly accelerating atomistic dynamics simulations. Trained by derivative-free optimization, the standard NEP achieves good accuracy but suffers from inefficiency due to its high-dimensional parameter search. To overcome this problem, we present a gradient-optimized NEP (GNEP) training framework that employs explicit analytical gradients and the Adam optimizer. This approach greatly improves training efficiency and convergence speed while maintaining accuracy and physical interpretability. By applying GNEP to the training of Sb-Te material systems (the datasets include crystalline, liquid, and disordered phases), the fitting time is substantially reduced, often by orders of magnitude, compared to the NEP training framework. The fitted potentials are validated against DFT reference calculations, demonstrating satisfactory agreement in the equation of state and radial distribution functions. These results confirm that GNEP retains high predictive accuracy and transferability while considerably improving computational efficiency, making it well suited for large-scale molecular dynamics simulations.
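The paper itself is not reproduced here, but the core idea of the abstract, replacing a derivative-free parameter search with explicit analytical gradients and the Adam optimizer, can be illustrated with a minimal sketch. The example below is an assumption for illustration only (a toy PyTorch energy model on synthetic data, not the authors' GPU-accelerated GNEP/NEP code): backpropagation supplies the gradient of the loss with respect to every network weight at once, which Adam then uses for updates.

```python
# Minimal sketch (illustrative, NOT the authors' implementation): fitting a
# toy neural-network energy model with analytical gradients and Adam, in
# contrast to the derivative-free search used by standard NEP training.
import torch

torch.manual_seed(0)

# Synthetic stand-in for a training set: per-structure descriptors and
# reference energies (playing the role of DFT data).
n_struct, n_desc = 256, 30
X = torch.randn(n_struct, n_desc)
E_ref = X.pow(2).sum(dim=1, keepdim=True) * 0.1  # synthetic "DFT" energies

# Small feedforward network standing in for the NEP energy model.
model = torch.nn.Sequential(
    torch.nn.Linear(n_desc, 40),
    torch.nn.Tanh(),
    torch.nn.Linear(40, 1),
)

# Adam consumes the analytical gradient of the loss w.r.t. all parameters,
# obtained by backpropagation, instead of sampling a high-dimensional
# parameter space as a derivative-free optimizer would.
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(X), E_ref)
    loss.backward()  # explicit gradients for every weight in one pass
    opt.step()
    if step % 500 == 0:
        print(f"step {step:4d}  energy RMSE = {loss.sqrt().item():.4e}")
```

In this setting each update costs one forward and one backward pass, which is why a gradient-based scheme converges in far fewer evaluations than a population-based search over the same parameters; the paper reports the corresponding speedups for its actual GNEP framework.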

Hongfu Huang, Junhao Peng, Kaiqi Li, Jian Zhou, Zhimei Sun

Physics

Hongfu Huang, Junhao Peng, Kaiqi Li, Jian Zhou, Zhimei Sun. Efficient GPU-Accelerated Training of a Neuroevolution Potential with Analytical Gradients [EB/OL]. (2025-07-01) [2025-07-16]. https://arxiv.org/abs/2507.00528
