Multi-fidelity learning for interatomic potentials: Low-level forces and high-level energies are all you need (2505.01590v2)
Abstract: The promise of machine learning interatomic potentials (MLIPs) has led to an abundance of public quantum mechanical (QM) training datasets. The quality of an MLIP is directly limited by the accuracy of the energies and atomic forces in the training dataset. Unfortunately, most of these datasets are computed with relatively low-accuracy QM methods, e.g., density functional theory with a moderate basis set. Due to the increased computational cost of more accurate QM methods, e.g., coupled-cluster theory with a complete basis set extrapolation, most high-accuracy datasets are much smaller and often do not contain atomic forces. The lack of high-accuracy atomic forces is quite troubling, as training with force data greatly improves the stability and quality of the MLIP compared to training on energies alone. Because each dataset is typically computed with its own level of theory, traditional single-fidelity learning is not capable of leveraging the vast amounts of published QM data. In this study, we apply multi-fidelity learning to train an MLIP on multiple QM datasets of different levels of accuracy, i.e., levels of fidelity. Specifically, we perform three test cases to demonstrate that multi-fidelity learning with both low-level forces and high-level energies yields an extremely accurate MLIP -- far more accurate than a single-fidelity MLIP trained solely on high-level energies and almost as accurate as a single-fidelity MLIP trained directly on high-level energies and forces. Therefore, multi-fidelity learning greatly alleviates the need to generate large and expensive datasets containing high-accuracy atomic forces and allows for more effective training on existing high-accuracy energy-only datasets. Indeed, low-accuracy atomic forces and high-accuracy energies are all that are needed to achieve a high-accuracy MLIP with multi-fidelity learning.
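The core idea described in the abstract -- combining low-fidelity force labels with high-fidelity energy labels in a single training objective -- can be sketched as a composite loss. This is a minimal illustration, not the authors' implementation; the function names, the simple weighted sum of mean-squared errors, and the default weights are all assumptions made for clarity.

```python
import numpy as np

def mse(pred, ref):
    """Mean-squared error between predictions and reference labels."""
    pred, ref = np.asarray(pred, dtype=float), np.asarray(ref, dtype=float)
    return float(np.mean((pred - ref) ** 2))

def multi_fidelity_loss(e_pred_hi, e_ref_hi, f_pred_lo, f_ref_lo,
                        w_energy=1.0, w_force=1.0):
    """Illustrative multi-fidelity objective (hypothetical form):
    high-level (e.g., coupled-cluster) energies and low-level
    (e.g., DFT) atomic forces each contribute a weighted MSE term,
    so neither dataset needs to provide both label types."""
    return (w_energy * mse(e_pred_hi, e_ref_hi)
            + w_force * mse(f_pred_lo, f_ref_lo))
```

In practice the two terms would be evaluated on different datasets (high-accuracy energy-only structures vs. larger low-accuracy structures with forces), with the relative weights treated as hyperparameters.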