
Interpretability of machine learning interatomic potentials

Develop interpretability frameworks for machine learning interatomic potentials that clarify how input representations (fixed descriptors or learned graph-based features) relate to predicted physical quantities (energies, forces), and establish the physical meaning and reliability of intermediate constructs such as atomic energy contributions.


Background

The tutorial notes that atomic energy contributions in high-dimensional neural network potentials are fitting constructs rather than physical observables: different weight initialisations can yield different partitions of the same total energy, which complicates physical interpretation. More broadly, the field employs complex models whose internal workings are not easily mapped to physical insight.
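The non-uniqueness of atomic energy partitions can be sketched numerically. The arrays below are synthetic stand-ins for the per-atom energies of two differently initialised models (they are not outputs of any real potential): a zero-sum perturbation changes every atomic contribution while leaving the total energy unchanged, which is exactly why the partition carries no direct physical meaning.

```python
import numpy as np

# Hypothetical per-atom energy contributions E_i from "model A".
rng = np.random.default_rng(0)
n_atoms = 5
partition_a = rng.normal(-3.0, 0.5, size=n_atoms)

# "Model B": add a zero-sum perturbation, mimicking a different
# initialisation that redistributes energy among atoms but preserves
# the fitted total E = sum_i E_i.
shift = rng.normal(0.0, 0.2, size=n_atoms)
shift -= shift.mean()          # make the perturbation sum to zero
partition_b = partition_a + shift

# Same total energy, different atomic contributions.
print(np.isclose(partition_a.sum(), partition_b.sum()))   # totals agree
print(np.allclose(partition_a, partition_b))              # partitions differ
```

Only the sum is constrained by the training data; any zero-sum redistribution of the per-atom terms is an equally valid fit, so intermediate atomic energies should be treated with care.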

In the closing section, the authors explicitly identify interpretability as an open question, highlighting the need for methods that make machine-learning potentials more transparent and physically meaningful.

References

There are still many open questions and challenges to be addressed, such as the long-range interactions, generalisation and interpretability.

Introduction to machine learning potentials for atomistic simulations (2410.00626 - Thiemann et al., 1 Oct 2024) in Summary and Outlook (Section 8)