MACE: Message Passing Atomic Cluster Expansion

  • Message Passing Atomic Cluster Expansion (MACE) is a framework that combines symmetry-adapted atomic cluster expansion with message passing to accurately represent complex many-body interactions in atomic systems.
  • It leverages higher-order message passing and equivariant graph neural networks to capture local and semilocal effects, improving predictive accuracy with fewer layers.
  • MACE demonstrates high efficiency and scalability, enabling fast, physically faithful simulations in materials science, chemistry, and large-scale molecular dynamics.

Message Passing Atomic Cluster Expansion (MACE) is a framework for constructing fast, accurate, and physically principled machine learning models of interatomic potentials, extensively deployed in computational materials science, chemistry, and molecular simulation. MACE fuses the systematic, symmetry-adapted basis of the atomic cluster expansion (ACE) with equivariant message passing through graph neural networks, resulting in a versatile architecture capable of representing complex many-body interactions and capturing both local and semilocal effects in atomic systems.

1. Theoretical Foundations

MACE is rooted in the atomic cluster expansion (ACE), a formalism in which the potential energy surface of an atomic system is represented as a systematically convergent linear or nonlinear combination of body-ordered, symmetry-adapted basis functions. In the traditional ACE, the local atomic environment is encoded through a set of concatenated atom-centered density correlations, which, when symmetrized over permutation and rotation, yield invariant or equivariant features for regression tasks (Nigam et al., 2022, Bernstein, 8 Oct 2024).
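
Concretely, the density-correlation construction proceeds in two steps: a one-particle basis is summed over the neighbors of atom $i$ to give permutation-invariant $A$-features, and products of $A$-features are then symmetrized into body-ordered $B$-features. In standard ACE notation (with $R_n$ a radial basis, $Y_l^m$ spherical harmonics, and $\mathcal{C}$ generalized Clebsch–Gordan coefficients),

$$A_{i,nlm} = \sum_{j \in \mathcal{N}(i)} R_n(r_{ij})\, Y_l^m(\hat{r}_{ij}), \qquad B_{i,\nu} = \sum_{m_1,\dots,m_N} \mathcal{C}^{\nu}_{m_1 \cdots m_N} \prod_{k=1}^{N} A_{i, n_k l_k m_k},$$

so that a product of $N$ $A$-features has body order $N+1$ (the central atom plus $N$ neighbors).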

The message passing extension of ACE yields MACE, wherein the same body-ordered descriptors are updated iteratively using graph neural networks. Each atom becomes a node, its environment a local cluster, and the update of its features is performed via a series of message-passing steps that aggregate information from neighbors, thus efficiently capturing high-order correlations and long-range effects (Batatia et al., 2022, Bochkarev et al., 2023).

Mathematically, a typical MACE layer updates the atomic features $h_i^{(\ell)}$ via

$$h_i^{(\ell+1)} = f\left(h_i^{(\ell)},\; \sum_{j \in \mathcal{N}(i)} M\left(h_i^{(\ell)}, h_j^{(\ell)}, \varphi(r_{ij})\right)\right)$$

where $M(\cdot)$ is a learnable message function incorporating radial and angular features (often via spherical harmonics or irreducible Cartesian tensors), and $f$ is a nonlinear update function (Bernstein, 8 Oct 2024, Kang, 1 May 2024, Zaverkin et al., 23 May 2024).
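
To make the update rule concrete, the following is a minimal PyTorch sketch of this equation for invariant scalar features only; the Gaussian radial embedding and the two MLPs standing in for $M$ and $f$ are illustrative assumptions, not the actual MACE implementation (which uses equivariant tensor features and higher body-order messages).

```python
import torch
import torch.nn as nn

class ToyMessagePassingLayer(nn.Module):
    """Sketch of h_i^(l+1) = f(h_i^(l), sum_j M(h_i, h_j, phi(r_ij))).

    Invariant scalar features only; real MACE uses equivariant tensor
    features and higher body-order messages.
    """

    def __init__(self, feat_dim: int, n_radial: int = 8, r_cut: float = 5.0):
        super().__init__()
        # Gaussian radial embedding: an illustrative choice of phi(r_ij).
        self.register_buffer("centers", torch.linspace(0.0, r_cut, n_radial))
        self.message_fn = nn.Sequential(  # stands in for M(h_i, h_j, phi)
            nn.Linear(2 * feat_dim + n_radial, feat_dim), nn.SiLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        self.update_fn = nn.Sequential(   # stands in for f(h_i, message sum)
            nn.Linear(2 * feat_dim, feat_dim), nn.SiLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, h, edge_index, r_ij):
        # h: (n_atoms, feat_dim); edge_index: (2, n_edges); r_ij: (n_edges,)
        src, dst = edge_index                    # messages flow src -> dst
        phi = torch.exp(-(r_ij[:, None] - self.centers) ** 2)
        m = self.message_fn(torch.cat([h[dst], h[src], phi], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum over neighbors
        return self.update_fn(torch.cat([h, agg], dim=-1))

# Usage on a toy 3-atom chain:
layer = ToyMessagePassingLayer(feat_dim=16)
h = torch.randn(3, 16)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
r_ij = torch.tensor([1.1, 1.1, 1.5, 1.5])
h_new = layer(h, edge_index, r_ij)  # (3, 16)
```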

2. Core Architecture and Message Passing

The distinctive feature of MACE is its use of higher-order message passing, where each message is not confined to pairwise (two-body) relationships but also encodes genuine many-body geometric features. MACE constructs messages using body-ordered tensor products expanded in a symmetry-adapted basis, typically truncated at fourth body order for practical applications (Batatia et al., 2022).

The three main computational phases in each layer are:

  1. Computing atomic basis features—Radial and angular information is expanded via radial basis functions and spherical harmonics (or, in recent variants, irreducible Cartesian tensors), generating equivariant features for each node.
  2. Constructing higher-body features—Symmetrized tensor products (Clebsch–Gordan contraction or Cartesian analogues) yield many-body representations.
  3. Aggregating and updating—A learnable linear combination of these features forms the message, which is then used to update the atomic features via simple residual connections.
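
Below is a minimal NumPy sketch of these three phases for a single atom, truncated at $l = 1$ (where the real spherical harmonics reduce to the components of the unit bond vector) and with the learnable weights of phase 3 omitted; it illustrates the feature construction, not the production implementation.

```python
import numpy as np

def gaussian_rbf(r, n_basis=4, r_cut=5.0):
    """Illustrative Gaussian radial basis, (n_neigh,) -> (n_neigh, n_basis)."""
    centers = np.linspace(0.0, r_cut, n_basis)
    return np.exp(-((r[:, None] - centers) ** 2))

def layer_phases(rel_pos):
    """rel_pos: (n_neigh, 3) displacements r_ij from the central atom."""
    r = np.linalg.norm(rel_pos, axis=1)
    # Phase 1: atomic basis = radial basis times (real) spherical harmonics.
    # For l = 1 the real spherical harmonics are the unit-vector components.
    R = gaussian_rbf(r)                    # (n_neigh, n_radial)
    Y1 = rel_pos / r[:, None]              # (n_neigh, 3)
    A = np.einsum("jn,jm->nm", R, Y1)      # sum over neighbors j
    # Phase 2: symmetrized tensor product. Coupling l=1 with l=1 to L=0 is
    # the dot product over m: a rotation-invariant 3-body feature.
    B = np.einsum("nm,km->nk", A, A)       # (n_radial, n_radial)
    # Phase 3: a learnable linear readout would mix B into the message;
    # weights are omitted in this sketch.
    return B.reshape(-1)

feats = layer_phases(np.array([[1.0, 0.0, 0.0],
                               [0.0, 1.2, 0.3],
                               [-0.8, 0.5, 0.0]]))
print(feats.shape)  # (16,)
```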

Higher-body (e.g., four-body) messages allow MACE to reach the requisite expressivity in as few as two message-passing layers, in contrast with the deeper stacks required by older two-body MPNNs (Batatia et al., 2022, Bernstein, 8 Oct 2024). This structure improves both efficiency and parallelizability.

3. Symmetry Adaptation and Tensor Representations

MACE enforces invariance and equivariance to the appropriate spatial symmetries (translations, rotations, and, where required, permutations). Original MACE and most closely related equivariant message-passing networks utilize spherical harmonics and Clebsch–Gordan contractions. Recent developments propose the use of higher-rank irreducible Cartesian tensors, which offer equivalent symmetry adaptation while reducing computational overhead and simplifying implementation (Zaverkin et al., 23 May 2024, Cheng, 12 Feb 2024).

The equivalence and computational advantages stem from the explicit construction of symmetric, traceless Cartesian tensors and their efficient outer product rules, allowing composition up to L = 4 with linear cost in tensor rank and essentially sidestepping the scaling bottlenecks associated with spherical harmonic coupling.
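
The rank-2 case makes this concrete: the irreducible part of the outer product $r \otimes r$ is its symmetric, traceless component, which carries the same information as the $l = 2$ spherical harmonics of $r$. A short NumPy sketch with an explicit equivariance check (SciPy is used only to generate a random rotation matrix):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def irreducible_rank2(v):
    """Symmetric, traceless rank-2 Cartesian tensor of a vector v."""
    outer = np.outer(v, v)
    return outer - np.eye(3) * np.trace(outer) / 3.0

v = np.array([1.0, 2.0, 0.5])
T = irreducible_rank2(v)
print(np.isclose(np.trace(T), 0.0))  # traceless: True
print(np.allclose(T, T.T))           # symmetric: True

# Equivariance: building the tensor from the rotated vector equals
# rotating the tensor, T(Qv) = Q T(v) Q^T.
Q = Rotation.random(random_state=0).as_matrix()
print(np.allclose(irreducible_rank2(Q @ v), Q @ T @ Q.T))  # True
```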

4. Extensions and Generalizations

Several extensions of the MACE framework have been proposed:

  • Graph ACE: Generalizes MACE to a complete basis of graph-based cluster functions, enabling efficient and transparent modeling of semilocal and many-atom interactions via recursive tensor decompositions and graph-topological representations (Bochkarev et al., 2023).
  • Long-range Interactions (FieldMACE): Couples standard short-range message passing in MACE with explicit multipole expansions and attention mechanisms to rigorously incorporate long-range electrostatics and environmental effects, crucial in both ground and excited-state simulations (Barrett et al., 28 Feb 2025).
  • Physically-Informed Bases: Incorporation of atomic orbital information into the basis (e.g., through optimized radial basis functions as in LCAONet) merges the inductive biases of quantum mechanics with MACE's geometric flexibility, further improving predictive performance (Nishio et al., 8 Feb 2024).
  • Transferable Excited-State Modeling (X-MACE): By integrating Deep Sets autoencoders, MACE can encode arbitrary, non-smooth excited-state potential energy surfaces as smooth invariant latent variables, enabling the faithful modeling of conical intersections and supporting transfer learning between ground and excited states (Barrett et al., 18 Feb 2025).
  • Coarse-Grained Force Fields: Using rigorous graph-coarsening algorithms to define mapping operators in coarse-grained molecular dynamics (CGMD), MACE reliably models the many-body CG potential with systematically improvable accuracy while preserving the underlying spectral properties of the full atomistic system (Mondal et al., 22 Jul 2025).

5. Computational Properties and Scalability

MACE architectures demonstrate high data efficiency (steep learning curves), computational speed, and scalability. The use of high-body order messages reduces the number of layers and model parameters needed for convergence, accelerates inference, and limits the growth of the receptive field, all while preserving high expressivity (Batatia et al., 2022, Bernstein, 8 Oct 2024).

Efficient parallelization strategies have been proposed, notably algorithms that restrict inter-node communication to local neighbors even across multiple message-passing layers, maintaining linear scaling with system size and the number of layers; this is validated in simulations exceeding 100 million atoms (Xia et al., 10 May 2025). These features collectively enable the deployment of MACE-based potentials in large-scale molecular dynamics.
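
The intuition can be made quantitative: a model with $T$ message-passing layers and cutoff $r_{\mathrm{cut}}$ has a receptive field of at most $T \cdot r_{\mathrm{cut}}$, so a naive domain decomposition needs a ghost-atom halo of that full width, whereas re-exchanging boundary features after every layer keeps the halo at $r_{\mathrm{cut}}$. The back-of-the-envelope sketch below compares the two for a cubic subdomain; the function and parameter names are illustrative and not taken from any particular code.

```python
def halo_fraction(box_edge, n_layers, r_cut):
    """Fraction of ghost (halo) atoms for one cubic subdomain, assuming
    uniform atomic density. A naive halo's width grows with the receptive
    field n_layers * r_cut; per-layer exchange keeps it at r_cut."""
    def frac(width):
        inner = box_edge ** 3
        outer = (box_edge + 2 * width) ** 3
        return (outer - inner) / outer
    return frac(n_layers * r_cut), frac(r_cut)

naive, per_layer = halo_fraction(box_edge=50.0, n_layers=2, r_cut=5.0)
print(f"naive halo: {naive:.0%}, per-layer exchange: {per_layer:.0%}")
# naive halo: 64%, per-layer exchange: 42%
```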

6. Extrapolation, Physical Realism, and Transferability

A key capability of MACE is the ability to extrapolate to unseen domains, such as surfaces, amorphous phases, or long-range electrostatics. This arises from the completeness of its body-ordered expansion and the capacity of its message-passing layers to recombine local geometric information into non-local interactions (e.g., reproducing the universal 1/r behavior of Coulombic forces) (Kang, 1 May 2024).

Transfer learning with MACE foundation models (e.g., MACE-MP-0, trained on Materials Project data) enables adaptation to new elements or physical regimes with greatly reduced data requirements. This is further enhanced in architectures such as FieldMACE and X-MACE, which support transfer to environmental effects and excited-state properties (Barrett et al., 28 Feb 2025, Barrett et al., 18 Feb 2025).
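
As a usage illustration, a pretrained foundation model can be attached to an ASE Atoms object as a calculator. The sketch below assumes the mace-torch package and its mace_mp convenience loader for MACE-MP-0; available model sizes and keyword names may vary between releases.

```python
from ase.build import bulk
from mace.calculators import mace_mp  # provided by the mace-torch package

# Load a MACE-MP-0 foundation model (weights download on first use);
# the "medium" size is an illustrative choice.
calc = mace_mp(model="medium", device="cpu")

atoms = bulk("Cu", "fcc", a=3.6)      # one-atom fcc copper cell
atoms.calc = calc
print(atoms.get_potential_energy())   # total energy in eV
print(atoms.get_forces().shape)       # (1, 3): forces per atom
```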

7. Applications and Comparative Performance

MACE and its descendants have been applied across multiple domains, including materials science (bulk phases, surfaces, and amorphous systems), molecular chemistry, excited-state dynamics, coarse-grained simulation, and large-scale molecular dynamics.

Comparisons with previous potentials (ACE, GAP, SchNet, NequIP, BOTNet, etc.) consistently find that MACE achieves equal or superior accuracy, often with smaller models and lower computational cost (Batatia et al., 2022, Bernstein, 8 Oct 2024).


In summary, the Message Passing Atomic Cluster Expansion provides a mathematically rigorous, physically faithful, computationally efficient, and highly generalizable approach to machine-learned modeling of atomic interactions. By unifying the body-ordered expansion of ACE with the expressive power of message-passing neural networks, MACE serves as a foundation for a broad spectrum of atomistic simulations, from highly accurate quantum chemistry to efficient coarse-grained modeling and large-scale molecular dynamics.