Small-Cell-Based Fast Active Learning of Machine Learning Interatomic Potentials (2504.07293v1)

Published 9 Apr 2025 in cond-mat.mtrl-sci

Abstract: Machine learning interatomic potentials (MLIPs) are often trained with on-the-fly active learning, where sampled configurations from atomistic simulations are added to the training set. However, this approach is limited by the high computational cost of ab initio calculations for large systems. Recent works have shown that MLIPs trained on small cells (1-8 atoms) rival the accuracy of large-cell models (100s of atoms) at far lower computational cost; herein, we refer to these as small-cell and large-cell training, respectively. In this work, we iterate on earlier small-cell training approaches and characterize the resulting small-cell protocol. Potassium and sodium-potassium systems were studied: the former, a simpler system, benchmarked in detail; the latter, a more complex binary system, used for further validation. Our small-cell training approach achieves up to two orders of magnitude in cost savings compared to large-cell (54-atom) training, with some training runs requiring fewer than 120 core-hours. Static and thermodynamic properties predicted by the MLIPs were evaluated, and small-cell training in both systems yielded strong agreement with ab initio results. Small cells appear to encode the information needed to model complex large-scale phenomena (solid-liquid interfaces, critical exponents, diverse concentrations) even when the training cells themselves are too small to accommodate these phenomena. Based on these tests, we provide analysis and recommendations.
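The on-the-fly active-learning loop the abstract describes can be sketched in a few lines. The Python toy below is not the authors' implementation: the descriptor vectors, the committee-disagreement selection criterion, and the `toy_dft_energy` / `perturb_small_cell` stand-ins are all assumptions introduced so the loop runs end to end. A real small-cell workflow would replace them with MD on 1-8 atom cells, an actual MLIP, and DFT labeling.

```python
# Hypothetical sketch of a small-cell active-learning loop (not the paper's code).
# Uncertainty is estimated via committee disagreement; numpy stand-ins replace
# the MD engine, the MLIP, and the ab initio labeler so the example is runnable.
import numpy as np

rng = np.random.default_rng(0)

def toy_dft_energy(descriptors: np.ndarray) -> float:
    """Stand-in for an ab initio (DFT) energy label on a small cell."""
    return float(np.sin(descriptors).sum() + 0.1 * (descriptors ** 2).sum())

def perturb_small_cell(descriptors: np.ndarray) -> np.ndarray:
    """Stand-in for one MD step on a 1-8 atom cell (random perturbation)."""
    return descriptors + rng.normal(scale=0.2, size=descriptors.shape)

def fit_committee(X: np.ndarray, y: np.ndarray, n_models: int = 4):
    """Fit a committee of ridge regressors on bootstrap resamples of (X, y)."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))
        A, b = X[idx], y[idx]
        # Closed-form ridge: w = (A^T A + lambda*I)^-1 A^T b
        w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ b)
        models.append(w)
    return models

# Seed training set: a few labeled small-cell configurations (8-dim descriptors).
X = rng.normal(size=(5, 8))
y = np.array([toy_dft_energy(x) for x in X])
models = fit_committee(X, y)

config = rng.normal(size=8)
for step in range(200):
    config = perturb_small_cell(config)
    preds = [config @ w for w in models]
    if np.std(preds) > 0.5:  # large committee disagreement: model is extrapolating
        # Label the flagged small cell "ab initio" and retrain on the fly.
        X = np.vstack([X, config])
        y = np.append(y, toy_dft_energy(config))
        models = fit_committee(X, y)

print(f"final training-set size: {len(X)} configurations")
```

Committee variance is only one common uncertainty proxy in MLIP active learning; other gating criteria (e.g., extrapolation grades) play the same role of deciding which sampled configurations are worth an ab initio call, which is where small cells keep the cost low.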
