Efficient GPU-Accelerated Training of a Neuroevolution Potential with Analytical Gradients (2507.00528v1)
Abstract: Machine-learning interatomic potentials (MLIPs) such as neuroevolution potentials (NEP) combine quantum-mechanical accuracy with computational efficiency, significantly accelerating atomistic dynamics simulations. Trained by derivative-free optimization, the standard NEP achieves good accuracy but suffers from inefficiency due to its high-dimensional parameter search. To overcome this problem, we present a gradient-optimized NEP (GNEP) training framework that employs explicit analytical gradients and the Adam optimizer. This approach greatly improves training efficiency and convergence speed while maintaining accuracy and physical interpretability. Applying GNEP to the training of Sb-Te material systems (with datasets covering crystalline, liquid, and disordered phases), we find that the fitting time is substantially reduced, often by orders of magnitude, compared to the original NEP training framework. The fitted potentials are validated against DFT reference calculations, showing satisfactory agreement in the equation of state and radial distribution functions. These results confirm that GNEP retains high predictive accuracy and transferability while considerably improving computational efficiency, making it well suited for large-scale molecular dynamics simulations.
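To illustrate the core idea described in the abstract, the following is a minimal sketch of fitting a potential's parameters with hand-derived analytical gradients and the Adam optimizer, rather than derivative-free search. It uses a toy linear-in-parameters energy model and hypothetical data; it is not the GNEP implementation, only a demonstration of the optimization scheme under those assumptions.

```python
import numpy as np

# Toy energy model E(x; w) = w . phi(x), where phi is a per-structure
# descriptor vector. The gradient of the MSE loss is written in closed
# form (analytical), mirroring the idea behind gradient-based NEP training.

rng = np.random.default_rng(0)

# Hypothetical dataset: descriptor vectors Phi and reference energies E_ref.
n_samples, n_features = 256, 8
Phi = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
E_ref = Phi @ w_true + 0.01 * rng.normal(size=n_samples)

def loss_and_grad(w):
    """Mean-squared energy loss and its closed-form analytical gradient."""
    residual = Phi @ w - E_ref                 # shape (n_samples,)
    loss = np.mean(residual ** 2)
    grad = 2.0 * Phi.T @ residual / n_samples  # d(loss)/dw, exact
    return loss, grad

# Adam optimizer state and standard default hyperparameters.
w = np.zeros(n_features)
m = np.zeros_like(w)   # first-moment (mean) estimate
v = np.zeros_like(w)   # second-moment (uncentered variance) estimate
lr, beta1, beta2, eps = 1e-2, 0.9, 0.999, 1e-8

for step in range(1, 2001):
    loss, g = loss_and_grad(w)
    m = beta1 * m + (1.0 - beta1) * g
    v = beta2 * v + (1.0 - beta2) * g ** 2
    m_hat = m / (1.0 - beta1 ** step)   # bias-corrected first moment
    v_hat = v / (1.0 - beta2 ** step)   # bias-corrected second moment
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    if step % 500 == 0:
        print(f"step {step:4d}  loss {loss:.3e}")
```

Because every update uses an exact gradient, each step moves all parameters at once, which is the source of the speedup over derivative-free optimizers that must probe the high-dimensional parameter space by sampling.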