
A Theoretical Case Study of the Generalisation of Machine-learned Potentials (2311.01664v1)

Published 3 Nov 2023 in physics.comp-ph, cs.NA, and math.NA

Abstract: Machine-learned interatomic potentials (MLIPs) are typically trained on datasets that encompass a restricted subset of possible input structures, which presents a potential challenge for their generalisation to a broader range of systems outside the training set. Nevertheless, MLIPs have demonstrated impressive accuracy in predicting forces and energies in simulations involving complex structures. In this paper we aim to take steps towards rigorously explaining the excellent observed generalisation properties of MLIPs. Specifically, we offer a comprehensive theoretical and numerical investigation of the generalisation of MLIPs in the context of dislocation simulations. We quantify precisely how the accuracy of such simulations is directly determined by a few key factors: the size of the training structures, the choice of training observations (e.g., energies, forces, virials), and the level of accuracy achieved in the fitting process. Notably, our study reveals the crucial role of fitting virials in ensuring the consistency of MLIPs for dislocation simulations. Our series of careful numerical experiments, encompassing screw, edge, and mixed dislocations, supports existing best practices in the MLIP literature but also provides new insights into the design of datasets and loss functions.
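
The abstract's point about the choice of training observations can be made concrete with a small sketch of the kind of weighted least-squares loss in which virial fitting enters. The snippet below is not taken from the paper; the `predict` callable, the `configs` dictionary layout, and the weights `w_E`, `w_F`, `w_V` are illustrative assumptions.

```python
import numpy as np

def mlip_loss(params, configs, predict, w_E=1.0, w_F=1.0, w_V=1.0):
    """Weighted least-squares loss over energy, force, and virial observations.

    A minimal sketch, assuming `predict(params, cfg)` returns the predicted
    total energy (scalar), forces (N x 3 array), and virial (3 x 3 array)
    for one training configuration `cfg`.
    """
    total = 0.0
    for cfg in configs:
        E_hat, F_hat, V_hat = predict(params, cfg)
        n_atoms = cfg["forces"].shape[0]
        # Per-atom normalisation keeps contributions comparable across cell sizes.
        total += w_E * (E_hat - cfg["energy"]) ** 2 / n_atoms
        total += w_F * np.sum((F_hat - cfg["forces"]) ** 2) / n_atoms
        total += w_V * np.sum((V_hat - cfg["virial"]) ** 2) / n_atoms
    return total
```

In this setting, setting `w_V = 0` corresponds to omitting virial observations from the fit, the design choice the abstract identifies as problematic for the consistency of dislocation simulations.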

Authors (3)
  1. Yangshuai Wang (23 papers)
  2. Shashwat Patel (1 paper)
  3. Christoph Ortner (91 papers)
Citations (2)
