
Ensemble Knowledge Distillation for Machine Learning Interatomic Potentials (2503.14293v3)

Published 18 Mar 2025 in physics.chem-ph and cs.AI

Abstract: The quality of machine learning interatomic potentials (MLIPs) strongly depends on the quantity of training data as well as the quantum chemistry (QC) level of theory used. Datasets generated with high-fidelity QC methods are typically restricted to small molecules and may be missing energy gradients, which makes it difficult to train accurate MLIPs. We present an ensemble knowledge distillation (EKD) method to improve MLIP accuracy when training on energy-only datasets. First, multiple teacher models are trained to QC energies and then used to generate atomic forces for all configurations in the dataset. Next, the student MLIP is trained to both the QC energies and the ensemble-averaged forces generated by the teacher models. We apply this workflow to the ANI-1ccx dataset, where the configuration energies are computed at the coupled-cluster level of theory. The resulting student MLIPs achieve new state-of-the-art accuracy on the COMP6 benchmark and show improved stability in molecular dynamics simulations.
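
The sketch below illustrates the EKD training scheme described in the abstract: an ensemble of teacher energy models produces force targets (as negative energy gradients), and a student is trained against both the QC energies and the ensemble-averaged teacher forces. The toy fully-connected networks, data shapes, and `force_weight` loss weighting are illustrative assumptions rather than the paper's actual MLIP architectures or hyperparameters.

```python
# Minimal sketch of ensemble knowledge distillation (EKD) for an energy-only dataset.
# Assumes simple fully-connected energy models over flattened Cartesian coordinates;
# the real workflow uses MLIPs and coupled-cluster reference energies.
import torch
import torch.nn as nn

def make_energy_model(n_inputs: int) -> nn.Module:
    # Toy stand-in for an MLIP: maps flattened coordinates to a scalar energy.
    return nn.Sequential(nn.Linear(n_inputs, 64), nn.SiLU(), nn.Linear(64, 1))

def forces_from_energy(model: nn.Module, coords: torch.Tensor) -> torch.Tensor:
    # Forces are the negative gradient of the predicted energy w.r.t. coordinates.
    coords = coords.requires_grad_(True)
    energy = model(coords).sum()
    return -torch.autograd.grad(energy, coords, create_graph=True)[0]

# Toy data: 32 configurations of 5 atoms (15 coordinates) with QC energies only.
n_conf, n_inputs = 32, 15
coords = torch.randn(n_conf, n_inputs)
qc_energy = torch.randn(n_conf, 1)  # placeholder for coupled-cluster energies

# Step 1: an ensemble of teacher models, each (in the full workflow) trained to QC energies.
teachers = [make_energy_model(n_inputs) for _ in range(4)]

# Step 2: ensemble-averaged teacher forces serve as distillation targets.
teacher_forces = torch.stack(
    [forces_from_energy(t, coords.clone()) for t in teachers]
).mean(dim=0).detach()

# Step 3: the student is trained to both QC energies and averaged teacher forces.
student = make_energy_model(n_inputs)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
force_weight = 0.1  # assumed relative weight of the force term

for step in range(100):
    opt.zero_grad()
    pred_energy = student(coords)
    pred_forces = forces_from_energy(student, coords.clone())
    loss = nn.functional.mse_loss(pred_energy, qc_energy) \
         + force_weight * nn.functional.mse_loss(pred_forces, teacher_forces)
    loss.backward()
    opt.step()
```

In this sketch the teachers are left untrained for brevity; in practice each teacher would first be fit to the QC energies before its forces are averaged into the distillation targets.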
