
Efficient GPU-Accelerated Training of a Neuroevolution Potential with Analytical Gradients (2507.00528v1)

Published 1 Jul 2025 in cond-mat.dis-nn, cond-mat.mtrl-sci, and physics.comp-ph

Abstract: Machine-learning interatomic potentials (MLIPs) such as neuroevolution potentials (NEP) combine quantum-mechanical accuracy with computational efficiency, significantly accelerating atomistic dynamics simulations. Trained by derivative-free optimization, the standard NEP achieves good accuracy but suffers from inefficiency due to the high-dimensional parameter search. To overcome this problem, we present a gradient-optimized NEP (GNEP) training framework employing explicit analytical gradients and the Adam optimizer. This approach greatly improves training efficiency and convergence speed while maintaining accuracy and physical interpretability. By applying GNEP to the training of Sb-Te material systems (datasets include crystalline, liquid, and disordered phases), the fitting time has been substantially reduced, often by orders of magnitude, compared to the NEP training framework. The fitted potentials are validated against DFT reference calculations, demonstrating satisfactory agreement in the equation of state and radial distribution functions. These results confirm that GNEP retains high predictive accuracy and transferability while considerably improving computational efficiency, making it well suited for large-scale molecular dynamics simulations.
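The abstract's key move is replacing NEP's derivative-free parameter search with analytical gradients of the fitting loss, stepped with the Adam optimizer. The sketch below illustrates that idea only; it is not the authors' GNEP code. The toy potential, descriptor data, and reference energy are hypothetical, and JAX autodiff stands in here for the paper's hand-derived analytical gradients.

```python
import jax
import jax.numpy as jnp

def energy(params, descriptors):
    # Tiny one-hidden-layer network: per-atom descriptors -> per-atom
    # energies, summed to a total energy (NEP-like in shape only).
    w1, b1, w2 = params
    h = jnp.tanh(descriptors @ w1 + b1)
    return jnp.sum(h @ w2)

def loss(params, descriptors, e_ref):
    # Energy squared error; a real fit would add force and virial
    # terms, which are differentiable in exactly the same way.
    return (energy(params, descriptors) - e_ref) ** 2

grad_loss = jax.grad(loss)  # analytical gradients, no derivative-free search

def adam_step(params, m, v, g, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Standard Adam update, applied leaf-wise to the parameter tuple.
    m = jax.tree_util.tree_map(lambda m_, g_: b1 * m_ + (1 - b1) * g_, m, g)
    v = jax.tree_util.tree_map(lambda v_, g_: b2 * v_ + (1 - b2) * g_ ** 2, v, g)
    def update(p, m_, v_):
        m_hat = m_ / (1 - b1 ** t)  # bias correction
        v_hat = v_ / (1 - b2 ** t)
        return p - lr * m_hat / (jnp.sqrt(v_hat) + eps)
    return jax.tree_util.tree_map(update, params, m, v), m, v

# Toy training loop on made-up data (32 atoms, 8-dimensional descriptors).
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
params = (0.1 * jax.random.normal(k1, (8, 16)), jnp.zeros(16),
          0.1 * jax.random.normal(k2, (16,)))
m = jax.tree_util.tree_map(jnp.zeros_like, params)
v = jax.tree_util.tree_map(jnp.zeros_like, params)
descriptors = jax.random.normal(k3, (32, 8))
e_ref = -1.5  # hypothetical reference total energy (eV)
for t in range(1, 201):
    g = grad_loss(params, descriptors, e_ref)
    params, m, v = adam_step(params, m, v, g, t)
```

Each parameter update costs one gradient evaluation, whereas a derivative-free search must sample many candidate parameter sets per step; in a high-dimensional parameter space, that difference is what the abstract's "orders of magnitude" speedup refers to.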
