Identify an architecture that enables scaling laws in large atomistic models

Determine a specific model architecture for large atomistic models that demonstrably exhibits scaling laws, i.e., predictable improvements in generalizability as model size, training-dataset size, and compute budget are increased.

Background

Large atomistic models seek to universally represent the ground-state potential energy surface as defined by density functional theory. A central hypothesis guiding their development is the existence of scaling laws: generalizability should consistently improve with larger model size, expanded training datasets, and increased computational budgets.

Although some prior works have shown partial evidence for scaling laws, recent studies highlight non-trivial challenges, especially for graph neural networks, where oversmoothing can hinder performance. It therefore remains unanswered which specific architectural choices ensure scaling-law behavior in practice. The paper introduces DPA3 and empirically validates scaling relationships, but the general architectural question motivating the field remains explicitly open.
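To make the scaling-law hypothesis concrete, a scaling law is typically expressed as a power law, e.g. test error falling as err ≈ a · N^(−b) in model size N. The following is a minimal, hedged sketch of how such an exponent would be recovered from measurements; all numbers are synthetic illustrations, not results from DPA3 or the paper.

```python
import numpy as np

# Synthetic illustration of scaling-law behavior: test error following a
# power law err = a * N**(-b) in model size N. These values are invented
# for demonstration and do not reproduce any published DPA3 numbers.
model_sizes = np.array([1e6, 1e7, 1e8, 1e9])   # parameter counts (synthetic)
test_errors = 0.5 * model_sizes ** -0.1        # errors obeying the power law

# A power law is a straight line in log-log space:
# log(err) = log(a) - b * log(N), so fit a degree-1 polynomial.
slope, intercept = np.polyfit(np.log(model_sizes), np.log(test_errors), 1)
exponent = -slope            # recovered scaling exponent b
prefactor = np.exp(intercept)  # recovered prefactor a

print(f"fitted exponent b = {exponent:.3f}, prefactor a = {prefactor:.3f}")
```

The same log-log regression applies when the x-axis is training-dataset size or compute budget; scaling-law claims in the literature amount to such fits holding predictively over several orders of magnitude.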

References

The specific model architecture through which a LAM may exhibit these scaling laws remains an open question.

A Graph Neural Network for the Era of Large Atomistic Models (arXiv:2506.01686, Zhang et al., 2 Jun 2025), Introduction (Section 1)