Surface Stability Modeling with Universal Machine Learning Interatomic Potentials: A Comprehensive Cleavage Energy Benchmarking Study (2508.21663v1)

Published 29 Aug 2025 in cond-mat.mtrl-sci, cs.LG, and physics.comp-ph

Abstract: Machine learning interatomic potentials (MLIPs) have revolutionized computational materials science by bridging the gap between quantum mechanical accuracy and classical simulation efficiency, enabling unprecedented exploration of materials properties across the periodic table. Despite their remarkable success in predicting bulk properties, no systematic evaluation has assessed how well these universal MLIPs (uMLIPs) can predict cleavage energies, a critical property governing fracture, catalysis, surface stability, and interfacial phenomena. Here, we present a comprehensive benchmark of 19 state-of-the-art uMLIPs for cleavage energy prediction using our previously established density functional theory (DFT) database of 36,718 slab structures spanning elemental, binary, and ternary metallic compounds. We evaluate diverse architectural paradigms, analyzing their performance across chemical compositions, crystal systems, slab thicknesses, and surface orientations. Our results reveal that training data composition dominates architectural sophistication: models trained on the Open Materials 2024 (OMat24) dataset, which emphasizes non-equilibrium configurations, achieve mean absolute percentage errors below 6% and correctly identify the thermodynamically most stable surface terminations in 87% of cases, without any explicit surface energy training. In contrast, architecturally identical models trained on equilibrium-only datasets show five-fold higher errors, while models trained on surface-adsorbate data fail catastrophically with a 17-fold degradation. Remarkably, simpler architectures trained on appropriate data achieve accuracy comparable to complex transformers while offering 10-100x computational speedup. These findings indicate that the community should prioritize strategic training data generation that captures the relevant physical phenomena.
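The central quantity benchmarked here, the cleavage energy, is conventionally defined as the energy cost per unit area of splitting a bulk crystal along a given plane: the slab's total energy minus the equivalent number of bulk atoms' energy, divided by twice the exposed surface area. Below is a minimal sketch of how one such data point might be computed, using ASE with its built-in EMT potential as a stand-in for a uMLIP calculator; the specific material, slab geometry, and calculator are illustrative assumptions, not the paper's actual settings or models.

```python
# Sketch: cleavage energy of an unrelaxed Cu(111) slab with ASE.
# EMT is a placeholder calculator; in practice one would attach a
# uMLIP calculator (e.g. a model trained on OMat24) instead.
import numpy as np
from ase.build import bulk, fcc111
from ase.calculators.emt import EMT

calc = EMT()

# Bulk reference energy per atom.
cu_bulk = bulk("Cu", "fcc", a=3.61)
cu_bulk.calc = calc
e_bulk_per_atom = cu_bulk.get_potential_energy() / len(cu_bulk)

# Slab exposing the (111) surface; vacuum isolates the two surfaces.
slab = fcc111("Cu", size=(1, 1, 8), vacuum=10.0)
slab.calc = calc
e_slab = slab.get_potential_energy()

# Cleaving one crystal creates two surfaces, hence the factor of 2.
area = np.linalg.norm(np.cross(slab.cell[0], slab.cell[1]))
e_cleave = (e_slab - len(slab) * e_bulk_per_atom) / (2.0 * area)
print(f"Cleavage energy: {e_cleave:.3f} eV/A^2")

# Benchmark metric from the abstract: mean absolute percentage error
# of uMLIP predictions against DFT references (names illustrative).
def mape(pred, ref):
    pred, ref = np.asarray(pred), np.asarray(ref)
    return 100.0 * np.mean(np.abs((pred - ref) / ref))
```

Repeating this over many slab orientations and terminations, then ranking terminations by predicted cleavage energy, is one plausible way to reproduce the abstract's two headline metrics: MAPE against DFT and the fraction of correctly identified most-stable terminations.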
