National Preprint Platform

Understanding multi-fidelity training of machine-learned force-fields

Source: arXiv
English Abstract

Effectively leveraging data from multiple quantum-chemical methods is essential for building machine-learned force fields (MLFFs) that are applicable to a wide range of chemical systems. This study systematically investigates two multi-fidelity training strategies, pre-training/fine-tuning and multi-headed training, to elucidate the mechanisms underpinning their success. We identify key factors driving the efficacy of pre-training followed by fine-tuning, but find that internal representations learned during pre-training are inherently method-specific, requiring adaptation of the model backbone during fine-tuning. Multi-headed models offer an extensible alternative, enabling simultaneous training on multiple fidelities. We demonstrate that a multi-headed model learns method-agnostic representations that allow for accurate predictions across multiple label sources. While this approach introduces a slight accuracy compromise compared to sequential fine-tuning, it unlocks new cost-efficient data generation strategies and paves the way towards developing universal MLFFs.
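As a rough illustration of the multi-headed strategy described above, the following is a minimal, hypothetical sketch (not the authors' implementation): a shared backbone maps input features to a method-agnostic representation, and one lightweight head per fidelity level (here labelled "DFT" and "CCSD(T)" purely for illustration) maps that representation to a method-specific prediction. All names, sizes, and the linear/tanh architecture are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-headed model: one shared backbone, one head per
# label source (fidelity). Dimensions and fidelity names are illustrative.
n_features, n_hidden = 8, 16
W_backbone = rng.normal(size=(n_features, n_hidden))   # shared across fidelities
heads = {name: rng.normal(size=(n_hidden, 1))          # one output head per fidelity
         for name in ("DFT", "CCSD(T)")}

def predict(x, fidelity):
    """Predict a scalar (e.g. an energy) for features x under one fidelity head."""
    h = np.tanh(x @ W_backbone)        # method-agnostic internal representation
    return (h @ heads[fidelity]).item()

x = rng.normal(size=n_features)
e_dft = predict(x, "DFT")       # same representation, DFT head
e_cc = predict(x, "CCSD(T)")    # same representation, CCSD(T) head
```

During training, each sample's loss would be computed only against the head matching its label source, so all fidelities update the shared backbone simultaneously.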

John L. A. Gardner, Hannes Schulz, Jean Helie, Lixin Sun, Gregor N. C. Simm

Subjects: computational chemistry; computer science

John L. A. Gardner, Hannes Schulz, Jean Helie, Lixin Sun, Gregor N. C. Simm. Understanding multi-fidelity training of machine-learned force-fields [EB/OL]. (2025-06-17) [2025-07-16]. https://arxiv.org/abs/2506.14963.
