Gaussian Approximation Potential Model

Updated 11 November 2025
  • Gaussian Approximation Potential (GAP) is a machine-learned interatomic potential that combines quantum-level accuracy with classical efficiency using sparse Gaussian process regression and SOAP descriptors.
  • It decomposes total energy into sums over local atomic contributions, enabling systematic improvements, built-in uncertainty quantification, and active learning for robust simulations.
  • GAP has driven innovations in interatomic potentials, bridging to advanced frameworks like ACE and MACE, and is applied in materials science, alloys, and molecular simulations.

The Gaussian Approximation Potential (GAP) model is a machine-learned interatomic potential, originally formulated to combine the accuracy of quantum mechanical (QM) methods with the computational efficiency of classical potentials for atomic-scale simulations. GAP achieves this by employing sparse Gaussian process regression over local atomic environments, encoded via explicit, physically motivated descriptors such as the Smooth Overlap of Atomic Positions (SOAP). The framework is systematically improvable and offers transferability, built-in uncertainty quantification, and rigorous control over accuracy and computational cost. GAP has driven methodological developments for ML-based interatomic potentials and remains foundational for modern extensions such as ACE and MACE (Bernstein, 2024).

1. Mathematical Foundations and Functional Form

GAP assumes that the total potential energy of a system of $N$ atoms can be decomposed as a sum over local atomic energies, each depending on the chemical environment of the atom:

$$E_{\mathrm{total}} = \sum_{i=1}^N \varepsilon(\mathcal{X}_i) \tag{1}$$

where $\varepsilon$ is a function defined on the local environment $\mathcal{X}_i$.
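
As a minimal illustration of the decomposition in Eq. (1), the sketch below sums per-atom energies returned by a local model; the `local_energy` callable and the random descriptors are placeholders introduced here for illustration, not part of the GAP formalism.

```python
import numpy as np

def total_energy(environments, local_energy):
    """Total energy as a sum of per-atom contributions (Eq. 1).

    environments : list of per-atom descriptor vectors, one per atom
    local_energy : callable mapping a descriptor vector to a scalar energy
    """
    return sum(local_energy(x) for x in environments)

# Example with a placeholder local model (illustrative only):
rng = np.random.default_rng(0)
envs = [rng.normal(size=30) for _ in range(8)]   # 8 atoms, 30-dim descriptors
E = total_energy(envs, local_energy=lambda x: 1e-3 * float(x @ x))
```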

Descriptors: The SOAP Formalism

A leading realization of $\mathcal{X}_i$ is the Smooth Overlap of Atomic Positions (SOAP). The atomic neighbor density for atom $i$,

$$\rho_i(\mathbf{r}) = \sum_{j \in \mathrm{neighbors}} e^{-|\mathbf{r} - \mathbf{r}_{ij}|^2/(2\sigma^2)},$$

is expanded in radial functions $R_n(r)$ and spherical harmonics $Y_{\ell m}(\hat{\mathbf{r}})$. The expansion coefficients,

$$c^i_{n \ell m} = \int d\mathbf{r}\; R_n(r)\, Y_{\ell m}(\hat{\mathbf{r}})\, \rho_i(\mathbf{r}),$$

yield rotationally invariant "power spectrum" components,

$$p^i_{n n' \ell} = \sum_{m=-\ell}^\ell \left( c^i_{n \ell m} \right)^* c^i_{n' \ell m} \tag{2}$$

which collectively comprise the SOAP descriptor for atom $i$.

The similarity between two environments, $\mathcal{X}_i$ and $\mathcal{X}_j$, is quantified by the SOAP kernel,

$$k_\mathrm{SOAP}(\mathcal{X}_i,\mathcal{X}_j) = \Bigl( \sum_{n n' \ell} p^i_{n n' \ell}\, p^j_{n n' \ell} \Bigr)^{\zeta} \tag{3}$$

with $\zeta$ an integer (commonly 1 or 2). Kernel hyperparameters include the Gaussian width $\sigma$, the cutoff radius $r_\mathrm{cut}$, the basis truncations $n_\mathrm{max}$ and $\ell_\mathrm{max}$, and the exponent $\zeta$.
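
The following NumPy sketch implements Eqs. (2)–(3) directly, assuming the expansion coefficients $c^i_{n\ell m}$ are already available as a complex array indexed by $(n, \ell, m)$; normalizing the power spectrum to unit length before taking the dot product is a common convention assumed here rather than part of Eq. (3).

```python
import numpy as np

def power_spectrum(c):
    """Rotationally invariant power spectrum p_{n n' l} of Eq. (2).

    c : complex array of shape (n_max, l_max+1, 2*l_max+1), where
        c[n, l, :2*l+1] holds the coefficients c_{n l m} for m = -l..l.
    """
    n_max, l_max_p1, _ = c.shape
    p = np.empty((n_max, n_max, l_max_p1))
    for l in range(l_max_p1):
        cl = c[:, l, : 2 * l + 1]                  # (n_max, 2l+1)
        p[:, :, l] = np.real(np.conj(cl) @ cl.T)   # sum over m
    return p.reshape(-1)

def soap_kernel(p_i, p_j, zeta=2):
    """SOAP kernel of Eq. (3), with unit normalization so k(X, X) = 1."""
    p_i = p_i / np.linalg.norm(p_i)
    p_j = p_j / np.linalg.norm(p_j)
    return float(p_i @ p_j) ** zeta
```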

Gaussian Process Regression and Sparsification

The atomic energy function $\varepsilon(\mathcal{X})$ is modeled as a Gaussian process regression over a sparse set of $M$ representative environments:

$$\varepsilon(\mathcal{X}) = \sum_{m=1}^M \alpha_m\, k(\mathcal{X}, \mathcal{X}_m) \tag{4}$$

where the $\alpha_m$ are coefficients to be determined, and $k$ is a positive-definite covariance kernel (SOAP or otherwise). Sparse approximations are used (with $M \ll N_\mathrm{train}$), and the regression is regularized to avoid overfitting and to account for noise in the training data.

The regression weights are computed by solving

$$\alpha = \bigl(K_{MM} + K_{MN}\,\Sigma^{-1} K_{NM}\bigr)^{-1} K_{MN}\,\Sigma^{-1} y \tag{5}$$

where $K_{MM}$ and $K_{MN}$ are Gram matrices over the representative set and between the representative and full training sets, respectively, $\Sigma$ is a regularization or "noise" matrix, and $y$ is the vector of training labels (energies, forces, etc.).
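
A compact NumPy sketch of Eqs. (4)–(5), assuming a diagonal noise matrix $\Sigma$ and adding a small jitter term for numerical stability; production fits use Cholesky- or QR-based solvers, so this illustrates the linear algebra rather than reproducing any particular implementation.

```python
import numpy as np

def fit_sparse_gp(K_MM, K_MN, y, sigma2, jitter=1e-8):
    """Solve the regularized sparse system of Eq. (5) for the weights alpha.

    K_MM   : (M, M) kernel matrix among representative environments
    K_MN   : (M, N) kernel matrix between representatives and training points
    y      : (N,)  training labels (e.g. energies)
    sigma2 : (N,)  per-label noise variances (the diagonal of Sigma)
    """
    K_MN_s = K_MN / sigma2                         # K_MN Sigma^{-1}
    A = K_MM + K_MN_s @ K_MN.T                     # K_MM + K_MN Sigma^{-1} K_NM
    A += jitter * np.eye(K_MM.shape[0])            # jitter for stability
    return np.linalg.solve(A, K_MN_s @ y)          # alpha

def predict_energy(k_xM, alpha):
    """Local energy of Eq. (4): a kernel expansion over the M representatives."""
    return float(k_xM @ alpha)
```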

2. Model Construction, Sparsification, and Training

The construction of GAP models comprises several key steps:

  • Training Dataset Assembly: A set of configurations with reference energies, forces, and (optionally) virials is assembled from QM calculations. The diversity and coverage of this dataset are decisive for interpolation accuracy and transferability.
  • Descriptor Evaluation: For each atom in each training configuration, invariant descriptors such as SOAP power spectra are evaluated.
  • Representative Environment Selection: A subset of $M$ representative atomic environments is chosen. Common strategies include farthest-point sampling in descriptor space (illustrated in the sketch after this list), matrix-decomposition methods (e.g., CUR, QR pivoting), and active-learning criteria leveraging the GPR variance.
  • Matrix Assembly and Linear Solve: Gram matrices $K_{MM}$ and $K_{NM}$ (or $K_{MN}$) are assembled for the representatives and the full training set. The regularized system (Eq. 5) is solved for the $\alpha_m$.
  • Hyperparameter Optimization: Kernel and regularization hyperparameters are tuned—frequently by maximizing the GP log-marginal likelihood or by Bayesian optimization, with support for automatic relevance determination (ARD) priors on the weights.
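
The sketch below illustrates farthest-point sampling in descriptor space, one of the representative-selection strategies listed above. It is a minimal illustration assuming a Euclidean metric on the descriptor vectors and a random initial pick; production codes typically work with kernel-induced distances or CUR/leverage-score selection instead.

```python
import numpy as np

def farthest_point_sampling(X, M, seed=0):
    """Greedily pick M rows of X (atomic descriptors) that are mutually far apart.

    X : (N, D) array of descriptor vectors, one per atomic environment
    M : number of representative environments to select
    Returns the indices of the selected environments.
    """
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(X)))]              # random first pick
    d_min = np.linalg.norm(X - X[selected[0]], axis=1)  # distance to selected set
    for _ in range(M - 1):
        nxt = int(np.argmax(d_min))                     # farthest from current set
        selected.append(nxt)
        d_min = np.minimum(d_min, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(selected)
```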

Sparsification is critical for scaling, since direct regression over all environments is computationally prohibitive ($O(N^3)$). Sparse GPs reduce training scaling to $O(N M^2 + M^3)$ and prediction scaling to $O(N M)$.

3. Algorithmic Extensions and Descriptor Innovations

SOAP-turbo and Descriptor Speedup

SOAP-turbo replaces the isotropic Gaussians in standard SOAP with factorized radial and angular Gaussians, reducing kernel evaluation cost by roughly $10\times$ at comparable accuracy.

Tensor-Reduced Elemental Embeddings

Standard SOAP kernels scale as $O(N_\mathrm{elem}^2)$, with $N_\mathrm{elem}$ the number of chemical species. By recasting the SOAP power spectrum in the formal language of the Atomic Cluster Expansion (ACE), one can apply tensor-reduction techniques to collapse the basis size, reducing or eliminating explicit element-pair scaling and enabling near element-independent cost.
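
The following sketch illustrates the idea behind such embeddings: neighbor-density coefficients carried per chemical species are contracted with a (learned or random) embedding matrix, so the subsequent power-spectrum contraction runs over a fixed number of channels rather than over species pairs. The embedding matrix, shapes, and function name here are illustrative assumptions, not the exact construction used in the literature.

```python
import numpy as np

def embed_species_channels(c_species, W):
    """Contract the species index of density coefficients with an embedding.

    c_species : complex array (S, n_max, l_max+1, 2*l_max+1), one coefficient
                block per neighbor species
    W         : (K, S) embedding matrix with K fixed channels, K independent of S
    Returns an array (K, n_max, l_max+1, 2*l_max+1); a power spectrum built from
    these channels scales with K rather than with the number of species.
    """
    return np.tensordot(W, c_species, axes=([1], [0]))

# Example: 10 species compressed into 4 channels (shapes chosen arbitrarily)
rng = np.random.default_rng(0)
c = rng.normal(size=(10, 8, 5, 9)) + 1j * rng.normal(size=(10, 8, 5, 9))
W = rng.normal(size=(4, 10)) / np.sqrt(10)
c_embedded = embed_species_channels(c, W)    # shape (4, 8, 5, 9)
```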

Body-Order Control

Early GAP implementations used only 2- and 3-body SOAP kernels. Subsequent extensions have included explicit 4-body terms or, conversely, restricted to lower body order for speed. This provides a tunable tradeoff between accuracy and computational cost.

Hyperparameter and Uncertainty Control

Hyperparameter optimization (log-marginal likelihood maximization, ARD priors, etc.) and on-the-fly active learning (adaptive sampling, $\Delta$-learning) have been developed for enhanced model reliability and targeted data acquisition during molecular dynamics.
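
As a minimal sketch of uncertainty-driven selection, the function below evaluates the GPR predictive variance at a new environment, which can be used to flag configurations for reference calculations during on-the-fly active learning. The closed-form full-GP variance (with a small jitter term) is an assumption made here for brevity; sparse-GP codes use the corresponding sparse variance approximations.

```python
import numpy as np

def gp_predictive_variance(k_xx, k_xM, K_MM, jitter=1e-8):
    """Predictive variance of a GP at a new environment x.

    k_xx : scalar kernel value k(x, x)
    k_xM : (M,) kernel vector between x and the representative set
    K_MM : (M, M) kernel matrix among the representatives
    """
    v = np.linalg.solve(K_MM + jitter * np.eye(K_MM.shape[0]), k_xM)
    return float(k_xx - k_xM @ v)

# Active-learning use: rank candidate environments seen during MD by variance
# and send the most uncertain ones to DFT, e.g.
#   variances = [gp_predictive_variance(kxx, kxM, K_MM) for kxx, kxM in candidates]
#   to_label  = np.argsort(variances)[::-1][:n_pick]
```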

4. Comparison with ACE and MACE Methods

The development of GAP catalyzed the emergence of atomic cluster expansion (ACE) and multilayer neural network ACE (MACE) frameworks.

  • ACE represents atomic energies as a linear expansion in rotationally invariant polynomial basis functions $A_{i,v}$, systematically including higher body orders. This yields:

$$E_\mathrm{total} = \sum_i \sum_v c_v\, A_{i,v} \tag{6}$$

The number of coefficients $c_v$ grows combinatorially with body order and chemical diversity, mitigated by tensor-reduction techniques analogous to modern SOAP embeddings.

  • MACE (Multilayer ACE) fuses the ACE basis with equivariant message-passing neural networks, allowing the network to build higher-level representations and achieve accuracy beyond linear ACE or vanilla GAP, at the expense of GPU-centric graph-convolutional evaluation.
  • Accuracy–Cost Tradeoff:
    • GAP (with SOAP-turbo): $\sim$1–3 ms/atom/step on CPU; moderate accuracy.
    • ACE: $\sim$0.05–0.2 ms/atom/step on CPU; similar or slightly better accuracy (with high body order).
    • MACE: $\sim$0.04–0.12 ms/atom/step on GPU; highest accuracy; supports multi-element "foundation models".

5. Performance, Limitations, and Practical Considerations

Performance Strengths

  • Invariance: SOAP-GAP is by construction exactly rotationally, translationally, and permutationally invariant.
  • Smoothness and Bayesian Framework: The use of smooth, squared-exponential kernels yields well-behaved, regular fits and built-in uncertainty quantification via GPR predictive variances.
  • Systematic Improvability: Increasing database size, descriptor dimensionality, or kernel flexibility yields steady accuracy enhancement.
  • Bayesian Uncertainty and Error Detection: GPR variance can serve as an indicator of model confidence and extrapolative prediction.

Limitations

  • Computational Cost of Prediction: The cost is proportional to the number of sparse points ($M \sim 10^3$–$10^4$).
  • Scaling with Element Types: Without tensor reduction, descriptor cost scales as $O(N_\mathrm{elem}^2)$.
  • Variance Underestimation: GPR variance may underestimate extrapolation error in low-data regimes.
  • Training Data Requirements: Construction of a single-system GAP generally requires on the order of $10^3$–$10^4$ DFT reference configurations.
  • Cutoff-Related Limitations: The interaction range is restricted by $r_\mathrm{cut}$, although hybridization with long-range models remains an area of active exploration.

Ongoing and Future Developments

Key directions of current research and development in the GAP ecosystem include:

  • Descriptor Acceleration: SOAP-turbo and tensor-reduced embeddings for handling larger, more chemically diverse systems.
  • Active-Learning Methods: Enhanced criteria for on-the-fly data acquisition and uncertainty-driven sampling.
  • Hybrid and Hierarchical Models: $\Delta$-GAP approaches on top of linear ACE, combining strengths of multiple frameworks.
  • Scalable and Transferable Architectures: Deployment of MACE "foundation models" and transfer learning to enable efficient retraining for new chemical systems.
  • Scalable Parallel Implementations: MPI- and ScaLAPACK-based domain-decomposed fitting infrastructures support fits on databases with $10^4$–$10^5$ environments and $M \approx 10^3$–$10^4$ sparse points (Klawohn et al., 2023, Klawohn et al., 2022).

6. Impact, Applications, and Research Frontiers

GAP models have demonstrated robust, near-DFT-level accuracy across a wide range of materials and applications:

  • Elemental and Alloy Systems: Accurate, transferable potentials for semiconductors, transition metals, and complex alloys.
  • Surfaces and Defects: Capable of capturing surfaces, vacancies, stacking faults, and dislocation energies.
  • Molecular Materials: Used in modeling bond breaking/forming and reactive events in hydrocarbons, CH systems, and amorphous materials.
  • Phonon and Thermodynamic Properties: Predicting vibrational spectra and thermal properties to exceptional accuracy.
  • Hybrid Modeling: Ongoing integration with long-range electrostatics, dispersion corrections, and hybrid frameworks.
  • Data-driven Discovery: The inclusion of reliable variance estimation supports active-learning and uncertainty-driven search strategies.

The GAP methodology's systematic improvability, modularity, and strong physical motivation continue to influence the development of new ML interatomic potential frameworks, while its limitations (evaluation cost, scaling, training data requirements) drive research into faster, more scalable, and more transferable alternatives (Bernstein, 2024).
