On the role of gradients for machine learning of molecular energies and forces (2007.09593v1)
Abstract: The accuracy of any machine learning potential can only be as good as the data used in the fitting process. The most efficient model is therefore trained on data selected to yield the highest accuracy relative to the cost of obtaining that data. We investigate the convergence of prediction errors of quantum machine learning models for organic molecules trained on energy and force labels, two common data types in molecular simulations. When training and predicting on different geometries of the same single molecule, we find that including atomic forces in the training data improves the accuracy of the predicted energies and forces 7-fold compared to models trained on energies only. Surprisingly, for models trained on sets of organic molecules of varying size and composition in non-equilibrium conformations, including forces in the training data does not improve the predicted energies of unseen molecules in new conformations. Predicted forces, however, still improve about 7-fold. For the systems studied, we find that force labels and energy labels contribute equally, per label, to the convergence of the prediction errors. Whether to include derivatives such as atomic forces in the training set should thus depend not only on the computational cost of acquiring the force labels, but also on the application domain, the property of interest, and the desired size of the machine learning model. Based on our observations, we describe key considerations for creating datasets for molecular potential energy surfaces that maximize the efficiency of the resulting machine learning models.
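
The abstract contrasts models trained on energy labels alone with models trained on combined energy and force labels. The sketch below illustrates the general idea, not the paper's actual model or code: a toy linear potential over inverse pairwise distances, with forces obtained by automatic differentiation of the predicted energy and a combined loss over both label types. All names (descriptor, energy, lam) and the representation are illustrative assumptions.

import jax
import jax.numpy as jnp

def descriptor(coords):
    # Toy representation: inverse pairwise distances (upper triangle).
    # Real models use far richer descriptors; this is only for illustration.
    n = coords.shape[0]
    i, j = jnp.triu_indices(n, k=1)
    return 1.0 / jnp.linalg.norm(coords[i] - coords[j], axis=-1)

def energy(params, coords):
    # Hypothetical linear model: E = w . descriptor + b.
    w, b = params
    return jnp.dot(w, descriptor(coords)) + b

# Forces are minus the gradient of the model energy w.r.t. atomic positions,
# so the learned force field is conservative by construction.
forces = jax.grad(lambda p, x: -energy(p, x), argnums=1)

def loss(params, coords, e_ref, f_ref, lam=1.0):
    # Combined objective over energy and force labels; lam (assumed here)
    # trades off the two label types.
    e_err = (energy(params, coords) - e_ref) ** 2
    f_err = jnp.mean((forces(params, coords) - f_ref) ** 2)
    return e_err + lam * f_err

# One gradient-descent step on a single fabricated configuration (demo only).
key = jax.random.PRNGKey(0)
coords = jax.random.normal(key, (5, 3))   # 5 atoms in 3D
e_ref, f_ref = 1.0, jnp.zeros((5, 3))     # placeholder labels, not real data
params = (jnp.zeros(5 * 4 // 2), 0.0)
grads = jax.grad(loss)(params, coords, e_ref, f_ref)
params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)

Setting lam to zero recovers an energy-only model, which is the comparison the abstract draws; a model trained on energies alone must infer the local slope of the potential energy surface, whereas force labels supply 3N gradient components per configuration directly.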