Machine learning of kinetic energy densities with target and feature averaging: better results with fewer training data (2309.03482v2)
Abstract: Machine learning of kinetic energy functionals (KEF), in particular kinetic energy density (KED) functionals, has recently attracted attention as a promising way to construct KEFs for orbital-free density functional theory (OF-DFT). Neural networks (NN) and kernel methods, including Gaussian process regression (GPR), have been used to learn the Kohn-Sham (KS) KED from density-based descriptors derived from KS DFT calculations. The descriptors are typically expressed as functions of different powers and derivatives of the electron density. This can generate large and extremely unevenly distributed datasets, which complicates effective application of machine learning techniques. Very uneven data distributions require many training data points, can cause overfitting, and ultimately lower the quality of an ML KED model. We show that one can produce more accurate ML models from fewer data by working with partially averaged density-dependent variables and KED. Averaging mitigates the issue of very uneven data distributions and the associated difficulties of sampling, while retaining enough spatial structure to remain within the paradigm of a KED functional. We use GPR as a function of partially spatially averaged terms of the fourth-order gradient expansion and of the Kohn-Sham effective potential, and obtain accurate and stable (with respect to different random choices of training points) kinetic energy models for Al, Mg, and Si simultaneously from as few as 2000 samples (about 0.3% of the total KS DFT data). In particular, accuracies on the order of 1% are obtained simultaneously for all three materials in a measure of the quality of the energy-volume dependence, B' = \frac{E(V_0-\Delta V)-2E(V_0)+E(V_0+\Delta V)}{\left(\frac{\Delta V}{V_0}\right)^2}.
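The error measure B' quoted above is a scaled second finite difference of the total energy with respect to volume. As a minimal sketch (not the paper's code; variable names and the quadratic test energy are illustrative assumptions), it can be evaluated from three total-energy points:

```python
# Sketch (illustrative, not the paper's code): the energy-curvature measure
# B' = [E(V0 - dV) - 2 E(V0) + E(V0 + dV)] / (dV / V0)**2
def b_prime(e_minus, e_0, e_plus, dv_over_v0):
    """Second finite difference of E(V), scaled by (dV/V0)**2."""
    return (e_minus - 2.0 * e_0 + e_plus) / dv_over_v0**2

# Hypothetical check: for a quadratic E(V) = 0.5 * k * (V - V0)**2,
# the second difference is k * dV**2, so B' = k * V0**2 for any step size.
V0, k, frac = 100.0, 2.0e-3, 0.02   # illustrative numbers
dV = frac * V0
E = lambda V: 0.5 * k * (V - V0) ** 2
print(b_prime(E(V0 - dV), E(V0), E(V0 + dV), frac))  # k * V0**2 = 20.0
```

Because B' compares ML and KS energies through curvature of the E(V) curve rather than pointwise, it probes the quality of the energy-volume dependence directly.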