
Nequix MP: Efficient E(3)-Equivariant Potential

Updated 13 January 2026
  • Nequix MP is an E(3)-equivariant interatomic potential designed for efficient and accurate atomistic simulations.
  • It employs a streamlined message-passing architecture with equivariant RMS normalization and species-independent skip connections.
  • Nequix MP delivers near state-of-the-art accuracy with significantly lower computational cost compared to larger neural potentials.

Nequix MP (Message Passing) is an E(3)-equivariant interatomic potential tailored for computationally efficient and accurate modeling of atomistic systems. Designed as a compact, reproducible, and high-throughput alternative to existing symmetry-adapted neural potentials, Nequix MP merges a pared-down equivariant message-passing architecture with modern normalization and optimization methodologies to achieve near state-of-the-art accuracy in materials science tasks at a fraction of the traditional computational overhead (Koker et al., 22 Aug 2025).

1. Architectural Overview

At its core, Nequix MP follows the architectural paradigm established by NequIP, employing message-passing neural network layers that are equivariant under E(3), the full Euclidean group of three-dimensional space. The model consists of four message-passing layers, each operating up to a maximum rotation order $L_{\max} = 3$, interleaved with residual self-connections and species-independent linear skip connections. A feature tensor $h_i^{(l,m)}$ is associated with each atom $i$, indexed by irreducible-representation degree $l$ ($0 \le l \le 3$) and projection $m$. Message passing proceeds by aggregating information from neighboring atoms, applying Clebsch–Gordan contractions, and updating atomic features in an equivariant manner.

In the forward pass:

  1. Neighbor aggregation computes atomic messages using irreducible-representation-coupled sums, learned radial basis functions of order 8 (with polynomial cutoff $p = 6$), and real spherical harmonics for orientation encoding.
  2. Feature updates apply equivariant root-mean-square normalization (RMSNorm) and a species-independent multilayer perceptron (MLP) to each update.
  3. Readout retains only the scalar ($l = 0$) components for energy prediction, with total system energy expressed as $E = \sum_i w^\top h_i^{(0)} + b$.
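The polynomial cutoff in step 1 can be made concrete with the standard smooth envelope of degree $p$. The sketch below (plain NumPy) uses the coefficients of the commonly used polynomial-envelope construction, which may differ in detail from Nequix MP's implementation; it tapers each radial basis value smoothly to zero at the cutoff radius:

```python
import numpy as np

def polynomial_cutoff(r, r_cut, p=6):
    """Smooth envelope u(r/r_cut): equals 1 at r = 0, reaches 0 at r = r_cut,
    with several derivatives vanishing at the cutoff for a smooth potential."""
    x = r / r_cut
    u = (1.0
         - (p + 1) * (p + 2) / 2.0 * x**p
         + p * (p + 2) * x**(p + 1)
         - p * (p + 1) / 2.0 * x**(p + 2))
    return np.where(x < 1.0, u, 0.0)
```

Multiplying the radial basis by such an envelope ensures that messages, and hence energies and forces, vary smoothly as atoms enter or leave each other's neighborhoods.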

Forces and stresses are computed by automatic differentiation: $$\mathbf{F}_i = -\nabla_{\mathbf{r}_i} E, \qquad \sigma = \frac{1}{V}\frac{\partial E}{\partial \varepsilon},$$ where $\varepsilon$ denotes strain and $V$ is the cell volume.
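This autodifferentiation pattern is straightforward in JAX, the framework Nequix MP is built on. The toy energy below is a hypothetical pairwise harmonic stand-in for the network's prediction, not the actual model; only the force-via-gradient pattern carries over:

```python
import jax
import jax.numpy as jnp

def energy(positions):
    """Toy stand-in for the model's total energy E(positions):
    a harmonic potential (r - 1)^2 summed over all atom pairs."""
    n = positions.shape[0]
    diff = positions[:, None, :] - positions[None, :, :]
    # Adding the identity on the diagonal keeps sqrt differentiable at r = 0.
    r = jnp.sqrt(jnp.sum(diff**2, axis=-1) + jnp.eye(n))
    mask = 1.0 - jnp.eye(n)
    return 0.25 * jnp.sum(mask * (r - 1.0) ** 2)  # 0.25: each pair counted twice

# F_i = -dE/dr_i, exactly as in the expression above.
forces = jax.jit(lambda pos: -jax.grad(energy)(pos))
```

For two atoms at separation 2 with equilibrium distance 1, each atom feels a unit restoring force toward the other, and the forces sum to zero, as translation invariance of the energy requires.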

2. Equivariant RMS Layer Normalization and Optimization

To enhance training stability and reduce the channel variance intrinsic to equivariant representations, Nequix MP applies an equivariant RMSNorm operation after each residual update. The normalization is performed per irrep $l$: $$\sigma_i^{(l)} = \sqrt{\frac{1}{2l+1} \sum_{m=-l}^{l} \| h_i^{(l,m)} \|^2 + \epsilon}.$$ The normalized feature is then

$$\bar{h}_i^{(l,m)} = \frac{h_i^{(l,m)}}{\sigma_i^{(l)}}\, \gamma^{(l)} + \beta^{(l)},$$

where $\gamma^{(l)}$ and $\beta^{(l)}$ are learnable parameters. Because $\sigma_i^{(l)}$ is invariant under E(3), rescaling by it preserves equivariance; the additive shift $\beta^{(l)}$ can safely be applied only to the scalar ($l = 0$) channels, since shifting a non-scalar feature by a constant would break equivariance.
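A minimal NumPy sketch of this per-irrep normalization, assuming a feature block of shape $(2l+1, \text{channels})$ and a per-channel RMS taken over the $m$ axis (the exact channel layout and reduction in Nequix MP may differ):

```python
import numpy as np

def irrep_rmsnorm(h, gamma, eps=1e-6):
    """RMS-normalize one degree-l feature block h of shape (2l+1, channels).
    sigma is computed over the m axis, so it is rotation-invariant and
    rescaling by it commutes with any rotation acting on the m axis."""
    sigma = np.sqrt(np.mean(h**2, axis=0, keepdims=True) + eps)
    return gamma * h / sigma
```

Because $\sigma$ and $\gamma$ act per channel while rotations act on the $m$ axis, normalizing a rotated feature gives the same result as rotating the normalized feature, which is exactly the equivariance property claimed above.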

Optimization uses the Muon optimizer in place of Adam. Muon accumulates a momentum buffer and then orthogonalizes each weight-matrix update, approximating the matrix inverse square root with a few Newton–Schulz iterations. This combination yields faster convergence and lower validation errors in energy and stress.
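The orthogonalization step can be sketched with the classic cubic Newton–Schulz iteration. Muon's released implementation uses a tuned quintic polynomial; the cubic coefficients below are the textbook ones, shown only to illustrate the idea:

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=8):
    """Approximate G (G^T G)^{-1/2}, the nearest (semi-)orthogonal matrix to G,
    without ever forming an explicit matrix inverse or square root."""
    # Frobenius-normalize so all singular values lie in (0, 1], where the
    # cubic iteration X <- 1.5 X - 0.5 X X^T X drives them toward 1.
    X = G / (np.linalg.norm(G) + 1e-7)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X
```

Applied to a momentum-averaged gradient matrix, this equalizes the singular values of the update, which is the mechanism credited with Muon's faster convergence.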

3. Model Complexity and Computational Efficiency

Nequix MP comprises approximately 708,000 parameters and required roughly 500 A100-GPU hours for full training on the MPtrj dataset (100 epochs on 4×A100 GPUs over 125 wall-clock hours). This is significantly lower than leading alternatives: Eqnorm MPtrj (1.31M parameters) required ~2000 GPU hours, HIENet (7.51M) ~2888 GPU hours, and MACE-MP-0 (4.69M) ~2600 GPU hours.

The architectural design and small parameter count yield roughly an order of magnitude faster inference than larger, top-ranked models: on a 32-atom test cell, Nequix MP achieved ~50,000 MD steps/day, versus ~5,000 steps/day for the top-ranked eSEN model (Koker et al., 22 Aug 2025).

4. Empirical Benchmarks and Performance

Nequix MP has been evaluated on several challenging materials science benchmarks:

| Benchmark | Metric | Score (Nequix) | Comparison |
| --- | --- | --- | --- |
| Matbench-Discovery | RMSD (Å) | 0.085 | eSEN-30M-MP: 0.797 (CPS-1, 30.1M params) |
| Matbench-Discovery | $\kappa_{\rm SRME}$ | 0.446 | Eqnorm MPtrj: 0.756 (CPS-1, 1.31M params) |
| Matbench-Discovery | F1 | 0.750 | |
| Matbench-Discovery | CPS-1 | 0.729 | |
| MDR phonon benchmark | MAE($\omega_{\max}$), K | 26 | |
| MDR phonon benchmark | MAE($S$), J K$^{-1}$ mol$^{-1}$ | 33 | |
| MDR phonon benchmark | MAE($F$), kJ mol$^{-1}$ | 12 | |
| MDR phonon benchmark | MAE($C_V$), J K$^{-1}$ mol$^{-1}$ | 6 | |

Nequix MP consistently ranked within the top three on both benchmarks, requiring less than one quarter of the computational cost of many competing methods.

5. Implementation and Reproducibility

The Nequix MP codebase is implemented in JAX with the Equinox library and is fully open source. The repository, installation scripts, pretrained weights, and complete reproducibility workflows are provided at https://github.com/atomicarchitects/nequix. Loading pretrained models, single-system relaxations, and complete retraining are all documented and script-driven to ensure reproducibility.

The workflow supports rapid deployment for energy and force predictions in atomistic systems, as well as end-to-end training on new datasets using the provided configuration files and scripts.

6. Context and Impact

Nequix MP demonstrates that aggressive architectural pruning and advanced training techniques allow for the construction of highly efficient E(3)-equivariant interatomic potentials without substantial compromise on predictive accuracy across diverse datasets, including materials screening and phonon-property prediction (Koker et al., 22 Aug 2025). By achieving high accuracy at low computational cost, Nequix MP lowers barriers to entry for research groups with limited hardware, accelerates broad benchmarking efforts, and enables high-throughput atomistic simulation workflows.

A plausible implication is that Nequix MP—and comparable compact, equivariant models—may become foundational components for future scalable, open-source materials modeling stacks, aligning with the trend toward democratization of foundation models in atomistic machine learning.
