DeePKS+ABACUS as a Bridge between Expensive Quantum Mechanical Models and Machine Learning Potentials (2206.10093v2)

Published 21 Jun 2022 in physics.chem-ph, cs.LG, and physics.comp-ph

Abstract: Recently, the development of machine learning (ML) potentials has made it possible to perform large-scale and long-time molecular simulations with the accuracy of quantum mechanical (QM) models. However, for high-level QM methods, such as density functional theory (DFT) at the meta-GGA level and/or with exact exchange, quantum Monte Carlo, etc., generating a sufficient amount of data for training an ML potential has remained computationally challenging due to their high cost. In this work, we demonstrate that this issue can be largely alleviated with Deep Kohn-Sham (DeePKS), an ML-based DFT model. DeePKS employs a computationally efficient neural-network-based functional model to construct a correction term added on top of a cheap DFT model. Upon training, DeePKS yields energies and forces that closely match those of the high-level QM method, while the amount of training data required is orders of magnitude smaller than that needed to train a reliable ML potential. As such, DeePKS can serve as a bridge between expensive QM models and ML potentials: one can generate a modest amount of high-accuracy QM data to train a DeePKS model, and then use the DeePKS model to label a much larger set of configurations for training an ML potential. This scheme for periodic systems is implemented in ABACUS, an open-source DFT package that is ready for use in various applications.
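
The bridging scheme described above amounts to a four-step pipeline. Below is a minimal sketch of that pipeline; every name in it (`high_level_qm`, `train_deepks`, `train_ml_potential`, the `Config`/`Label` types) is a hypothetical placeholder used for illustration, not the actual DeePKS-kit, ABACUS, or DeePMD-kit API.

```python
"""Illustrative sketch of the DeePKS bridging workflow from the abstract.

All callables are passed in as placeholders; in practice they would be
backed by a high-level QM code, the DeePKS training loop in ABACUS, and
an ML-potential trainer such as DeePMD-kit.
"""

from typing import Callable, List, Tuple

Config = List[float]               # placeholder for an atomic configuration
Label = Tuple[float, List[float]]  # (energy, forces)


def bridge(
    configs: List[Config],
    high_level_qm: Callable[[Config], Label],
    train_deepks: Callable[[List[Tuple[Config, Label]]], Callable[[Config], Label]],
    train_ml_potential: Callable[[List[Tuple[Config, Label]]], object],
) -> object:
    # Step 1: label a *small* subset with the expensive high-level method
    # (meta-GGA / hybrid DFT, quantum Monte Carlo, ...).
    small = configs[: max(1, len(configs) // 100)]  # orders of magnitude fewer frames
    qm_labels = [high_level_qm(c) for c in small]

    # Step 2: train DeePKS, i.e. fit a neural-network correction on top of
    # a cheap DFT baseline so that baseline + correction reproduces the
    # high-level energies and forces.
    deepks = train_deepks(list(zip(small, qm_labels)))

    # Step 3: the trained DeePKS model is cheap enough to label the full,
    # much larger configuration pool.
    bulk_labels = [(c, deepks(c)) for c in configs]

    # Step 4: train a conventional ML potential on the DeePKS-labelled data.
    return train_ml_potential(bulk_labels)
```

The key design point the sketch captures is the asymmetry in data cost: only the small subset in step 1 ever touches the expensive QM method, while the bulk labelling in step 3 runs at roughly cheap-DFT cost.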

Authors (12)
  1. Wenfei Li (15 papers)
  2. Qi Ou (9 papers)
  3. Yixiao Chen (25 papers)
  4. Yu Cao (129 papers)
  5. Renxi Liu (7 papers)
  6. Chunyi Zhang (16 papers)
  7. Daye Zheng (7 papers)
  8. Chun Cai (10 papers)
  9. Xifan Wu (44 papers)
  10. Han Wang (420 papers)
  11. Mohan Chen (53 papers)
  12. Linfeng Zhang (160 papers)
Citations (6)