Application-specific Machine-Learned Interatomic Potentials: Exploring the Trade-off Between Precision and Computational Cost
Machine-learned interatomic potentials (MLIPs) are revolutionizing computational materials science and chemistry by offering an efficient alternative to {\em ab initio} molecular dynamics (MD) simulations. However, fitting high-quality MLIPs remains a challenging, time-consuming, and computationally intensive task in which numerous trade-offs must be weighed: How many, and what kind of, atomic configurations should be included in the training set? At what level of {\em ab initio} convergence should the training set be generated? Which loss function should be used for fitting the MLIP? Which machine-learning architecture should be used to train the MLIP? The answers to these questions significantly impact both the computational cost of MLIP training and the accuracy and computational cost of subsequent MLIP MD simulations. In this study, we highlight that simultaneously considering training-set selection strategies, energy versus force weighting in the loss function, the precision of the {\em ab initio} reference simulations, and the model complexity and computational cost of the MLIP can lead to a significant reduction in the overall computational cost of training and evaluating MLIPs. This opens the door to the computationally efficient generation of high-quality MLIPs for a range of applications that demand different accuracy versus training and evaluation cost trade-offs.
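The energy versus force weighting mentioned above can be made concrete with a generic weighted least-squares objective (a sketch in our own notation, not necessarily the exact form used in the paper). For $N_{\mathrm{conf}}$ training configurations, each with $N_i$ atoms, a common choice is
\[
\mathcal{L} = w_E \sum_{i=1}^{N_{\mathrm{conf}}} \left( \frac{E_i^{\mathrm{MLIP}} - E_i^{\mathrm{ref}}}{N_i} \right)^{2} + w_F \sum_{i=1}^{N_{\mathrm{conf}}} \frac{1}{3N_i} \sum_{j=1}^{N_i} \left\lVert \mathbf{F}_{ij}^{\mathrm{MLIP}} - \mathbf{F}_{ij}^{\mathrm{ref}} \right\rVert^{2},
\]
where $E_i$ are per-configuration energies, $\mathbf{F}_{ij}$ are per-atom forces, and the weights $w_E$ and $w_F$ set the relative importance of energy and force errors. Tuning the ratio $w_F/w_E$ shifts accuracy between energies and forces, which is one axis of the trade-off space explored here.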
Ilgar Baghishov, Jan Janssen, Graeme Henkelman, Danny Perez
Computing Technology, Computer Technology
Ilgar Baghishov, Jan Janssen, Graeme Henkelman, Danny Perez. Application-specific Machine-Learned Interatomic Potentials: Exploring the Trade-off Between Precision and Computational Cost [EB/OL]. (2025-06-05) [2025-06-15]. https://arxiv.org/abs/2506.05646