Understanding multi-fidelity training of machine-learned force-fields (2506.14963v1)
Abstract: Effectively leveraging data from multiple quantum-chemical methods is essential for building machine-learned force fields (MLFFs) that are applicable to a wide range of chemical systems. This study systematically investigates two multi-fidelity training strategies, pre-training/fine-tuning and multi-headed training, to elucidate the mechanisms underpinning their success. We identify key factors driving the efficacy of pre-training followed by fine-tuning, but find that internal representations learned during pre-training are inherently method-specific, requiring adaptation of the model backbone during fine-tuning. Multi-headed models offer an extensible alternative, enabling simultaneous training on multiple fidelities. We demonstrate that a multi-headed model learns method-agnostic representations that allow for accurate predictions across multiple label sources. While this approach introduces a slight accuracy compromise compared to sequential fine-tuning, it unlocks new cost-efficient data generation strategies and paves the way towards developing universal MLFFs.
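As a rough illustration of the multi-headed strategy described above, the following is a minimal sketch (not the authors' implementation) of a shared backbone with one readout head per label source; the descriptor dimension, the two example fidelities, and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiHeadedMLFF(nn.Module):
    """Toy multi-headed model: a shared backbone learns method-agnostic
    representations, and a separate head per fidelity maps them to energies."""
    def __init__(self, in_dim=32, backbone_dim=64, fidelities=("DFT", "CCSD(T)")):
        super().__init__()
        # Shared backbone, common to all label sources.
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, backbone_dim), nn.SiLU(),
            nn.Linear(backbone_dim, backbone_dim), nn.SiLU(),
        )
        # One readout head per quantum-chemical method (fidelity).
        self.heads = nn.ModuleDict({f: nn.Linear(backbone_dim, 1) for f in fidelities})

    def forward(self, features, fidelity):
        # Route the shared representation through the head matching the label source.
        return self.heads[fidelity](self.backbone(features))

# Simultaneous training on multiple fidelities: each sample's loss is computed
# against the head corresponding to the method that produced its label.
model = MultiHeadedMLFF()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
features = torch.randn(8, 32)                       # placeholder descriptors
labels = {"DFT": torch.randn(8, 1), "CCSD(T)": torch.randn(8, 1)}  # synthetic labels
loss = sum(nn.functional.mse_loss(model(features, f), y) for f, y in labels.items())
loss.backward()
optimizer.step()
```

In this sketch, only the heads are method-specific; all fidelities update the same backbone, which is one way to realize the method-agnostic representations the abstract refers to.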