A Study on the Fine-Tuning Performance of Universal Machine-Learned Interatomic Potentials (U-MLIPs) (2506.07401v1)
Abstract: Universal machine-learned interatomic potentials (U-MLIPs) have demonstrated effectiveness across diverse atomistic systems but often require fine-tuning for task-specific accuracy. We investigate the fine-tuning of two MACE-based foundation models, MACE-MP-0 and its variant MACE-MP-0b, and report several key findings. Fine-tuning on task-specific datasets enhances accuracy and, in some cases, outperforms models trained from scratch. Additionally, fine-tuned models benefit from faster convergence due to the strong initial predictions provided by the foundation model. The success of fine-tuning also depends on careful dataset selection, which can be optimized through filtering or active learning. We further discuss practical strategies for fine-tuning foundation models more effectively in atomistic simulations and explore future directions for their development and applications.
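
For readers who want to reproduce the starting point of such a fine-tuning workflow, the sketch below loads the pretrained MACE-MP-0 foundation model through the ASE calculator interface of the mace-torch package and evaluates it on a task-specific structure. This is a minimal illustration under stated assumptions, not the paper's actual setup: the example structure, model size, and device are placeholders, and the paper's fine-tuning datasets are not shown.

```python
# Minimal sketch: probe the MACE-MP-0 foundation model on a candidate
# structure before fine-tuning, via the mace-torch ASE calculator.
# The structure, model size, and device below are illustrative choices,
# not the systems studied in the paper.
from ase.build import bulk
from mace.calculators import mace_mp

# Load the pretrained MACE-MP-0 foundation model as an ASE calculator.
calc = mace_mp(model="medium", device="cpu", default_dtype="float64")

# A simple stand-in structure; in practice this would come from the
# task-specific dataset used for fine-tuning.
atoms = bulk("Cu", "fcc", a=3.6, cubic=True)
atoms.calc = calc

# Strong out-of-the-box predictions like these are what give fine-tuned
# models their fast convergence relative to training from scratch.
print("Foundation-model energy (eV):", atoms.get_potential_energy())
print("Max |force| (eV/Angstrom):", abs(atoms.get_forces()).max())
```

Such baseline predictions on the target system are also a practical way to decide whether fine-tuning is needed at all, or whether the foundation model is already adequate for the task.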