Exa-AMD: Exascale Materials Simulation

Updated 8 October 2025
  • Exa-AMD is an open-source, exascale-ready simulation framework designed for AI-assisted materials discovery through automated workflow integration.
  • It employs modular task-based parallelization and robust data management to generate, screen, and compute candidate crystal structures using ML and DFT.
  • Benchmarking demonstrates near-ideal strong scaling with significant reductions in ML screening and DFT relaxation times on CPU and GPU clusters.

exa-AMD is an open-source, exascale-ready simulation framework for high-throughput, AI-assisted materials discovery. It implements modular workflow automation, task-based parallelization, and integrated data management to support the generation, screening, and first-principles computation of candidate crystal structures, with benchmarked scalability on modern supercomputers (Xiaa et al., 1 Oct 2025).

1. System Architecture and Workflow Overview

exa-AMD provides an automated pipeline for accelerated materials design, starting from user-specified elemental pools and crystal templates (typically in CIF format). The system generates hypothetical crystal structures via automated substitution, combinatorial shuffling, and lattice volume scaling. Candidate structures are subjected to rapid machine learning-based stability screening, followed by prioritized first-principles calculations and automatic phase diagram construction. Results—including relaxed structures and formation energies—are distributed along with updated convex hulls and tabulated properties.
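
A minimal sketch of this template-based generation step is shown below, written with pymatgen (a common structure-manipulation library; exa-AMD's own generator may differ). The file name template.cif, the elemental pool, and the volume-scaling factors are hypothetical placeholders.

```python
# Minimal sketch of template-based candidate generation using pymatgen.
# "template.cif", the elemental pool, and the scaling factors are placeholders;
# exa-AMD's actual generator may use different rules.
from itertools import permutations

from pymatgen.core import Structure

template = Structure.from_file("template.cif")                 # prototype crystal
pool = ["Fe", "Co", "Zr"]                                      # user-specified elemental pool
species = sorted({site.species_string for site in template})   # distinct species in template

candidates = []
for combo in permutations(pool, len(species)):
    s = template.copy()
    s.replace_species(dict(zip(species, combo)))               # element substitution
    for scale in (0.95, 1.00, 1.05):                           # lattice volume scaling
        c = s.copy()
        c.scale_lattice(s.volume * scale)
        candidates.append(c)

print(f"Generated {len(candidates)} hypothetical structures")
```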

The workflow is decomposed into discrete parallelizable tasks (“apps”), managed and scheduled via Parsl, a Python-based task-parallel programming library enabling scalable, fault-tolerant orchestration across heterogeneous compute platforms. exa-AMD leverages both distributed inter-node parallelism and intra-node concurrency, maintaining flexibility for CPU-only and GPU-accelerated systems.
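
The following is a hedged sketch of how such a pipeline can be decomposed into Parsl apps. The task bodies and names (generate_structures, ml_screen, dft_relax) are simplified placeholders, not exa-AMD's actual modules.

```python
# Illustrative Parsl decomposition of the pipeline into parallel "apps".
# Function bodies and task names are simplified placeholders.
import parsl
from parsl import python_app, bash_app
from parsl.configs.local_threads import config

parsl.load(config)  # local test config; clusters use site-specific configs

@python_app
def generate_structures(template_cif, element_pool):
    # substitution / shuffling / volume scaling (placeholder)
    return [f"{template_cif}:{e}" for e in element_pool]

@python_app
def ml_screen(structure):
    # CGCNN formation-energy prediction (placeholder score)
    return (structure, 0.0)

@bash_app
def dft_relax(structure, stdout="relax.out", stderr="relax.err"):
    # launch the DFT engine for a shortlisted candidate (placeholder command)
    return f"echo 'relax {structure}'"

pool = generate_structures("template.cif", ["Na", "B", "C"]).result()
scores = [ml_screen(s) for s in pool]            # screened concurrently
shortlist = [f.result()[0] for f in scores]      # gather futures
runs = [dft_relax(s) for s in shortlist]         # DFT jobs run in parallel
_ = [r.result() for r in runs]
```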

2. Technical Implementation and Parallelization Strategies

exa-AMD is implemented in Python, utilizing Parsl for elastic resource management and dynamic task scheduling. The architecture supports:

  • Task-based modularization: Structure generation, ML screening, DFT relaxation, convex hull analysis, and output routines are encapsulated as separable modules, facilitating user customization and replacement (e.g., alternate ML models or DFT engines).
  • Optimized data orchestration: Jobs are scheduled with built-in fault tolerance, elasticity, and resumability, and computational resources (CPUs, GPUs) are provisioned dynamically per workflow stage; a configuration sketch follows this list. ML inference is offloaded to GPUs for model screening, while DFT jobs exploit high-throughput parallelism on clusters.
  • Strong scaling: Near-ideal linear speedup as node count increases, verified empirically on both CPU-based and GPU-based supercomputers.
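
A hedged configuration sketch of this per-stage provisioning, using Parsl's HighThroughputExecutor with separate CPU and GPU pools, is given below; the labels, Slurm options, and block counts are hypothetical and would be site-specific in practice.

```python
# Hedged sketch of per-stage resource provisioning with Parsl executors.
# Labels, partitions, and block counts are hypothetical placeholders.
from parsl.config import Config
from parsl.executors import HighThroughputExecutor
from parsl.providers import SlurmProvider

config = Config(
    executors=[
        HighThroughputExecutor(
            label="gpu_ml",                      # CGCNN inference on GPU nodes
            available_accelerators=4,            # one worker per GPU
            provider=SlurmProvider(partition="gpu", nodes_per_block=1,
                                   max_blocks=8, walltime="02:00:00"),
        ),
        HighThroughputExecutor(
            label="cpu_dft",                     # high-throughput DFT relaxations
            provider=SlurmProvider(partition="cpu", nodes_per_block=1,
                                   max_blocks=32, walltime="12:00:00"),
        ),
    ],
)

# Apps are then pinned to a pool, e.g. @python_app(executors=["gpu_ml"]).
```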

3. Machine Learning Screening and Quantum Calculations

Stability pre-screening is performed via a Crystal Graph Convolutional Neural Network (CGCNN). This model predicts formation energies from crystal graph representations, applying graph convolutions of the form

h_i^{(l+1)} = \sigma\left( \sum_{j \in \mathcal{N}(i)} W^{(l)} h_j^{(l)} + b^{(l)} \right)

where h_i^{(l)} is the node feature vector at layer l, \mathcal{N}(i) is the set of atomic neighbors of atom i, W^{(l)} and b^{(l)} are learnable weights and bias, and \sigma is the nonlinearity. Candidates with low predicted formation energy are retained.
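
The update rule can be illustrated in a few lines of NumPy. Note that the published CGCNN layer additionally uses bond features and a gating term; the sketch below implements only the simplified equation above, on random toy inputs.

```python
# Minimal NumPy illustration of the simplified graph-convolution update above.
# The full CGCNN layer also uses bond features and gating; this sketch
# implements only the equation as written, with random toy inputs.
import numpy as np

rng = np.random.default_rng(0)

n_atoms, dim = 4, 8
h = rng.normal(size=(n_atoms, dim))            # node features h_i^{(l)}
W = rng.normal(size=(dim, dim)) * 0.1          # learnable weights W^{(l)}
b = np.zeros(dim)                              # learnable bias b^{(l)}
neighbors = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}   # crystal-graph adjacency

def conv_layer(h, W, b, neighbors):
    """One layer: h_i <- sigma(sum_{j in N(i)} W h_j + b), sigma = softplus."""
    h_next = np.empty_like(h)
    for i, nbrs in neighbors.items():
        msg = sum(W @ h[j] for j in nbrs) + b
        h_next[i] = np.log1p(np.exp(msg))      # softplus nonlinearity
    return h_next

h_out = conv_layer(h, W, b, neighbors)
print(h_out.shape)   # (4, 8): updated node features h_i^{(l+1)}
```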

Shortlisted structures undergo first-principles density functional theory (DFT) calculations (default: VASP; alternate engines supported), with formation energies computed as

E_f = E_{\text{compound}} - \sum_{i} x_i E_i^{\text{ref}}

where E_{\text{compound}} is the total energy of the unit cell, E_i^{\text{ref}} are the elemental reference energies, and x_i are the atomic stoichiometries. Thermodynamic stability is assessed relative to the convex hull.
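
A small helper illustrating this formation-energy bookkeeping is shown below; the composition and energies are illustrative placeholders, not values from the paper.

```python
# Formation energy per atom from DFT total energies, following E_f above.
# All energies below are illustrative placeholders, not values from the paper.
def formation_energy_per_atom(e_compound, composition, e_ref):
    """E_f = E_compound - sum_i x_i * E_i_ref, normalized per atom.

    composition: dict element -> number of atoms in the unit cell
    e_ref:       dict element -> reference energy per atom (eV/atom)
    """
    n_atoms = sum(composition.values())
    e_elements = sum(n * e_ref[el] for el, n in composition.items())
    return (e_compound - e_elements) / n_atoms

# Hypothetical Na2BC unit cell: total energy and elemental references in eV.
e_f = formation_energy_per_atom(
    e_compound=-18.40,
    composition={"Na": 2, "B": 1, "C": 1},
    e_ref={"Na": -1.30, "B": -6.70, "C": -9.20},
)
print(f"{e_f:.3f} eV/atom")   # 0.025 eV/atom for these toy numbers
```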

4. Benchmarking and Performance Analysis

exa-AMD demonstrates substantial scalability in benchmark tests:

System          1-node Time (min)   32-node Time (min)   Parallel Efficiency
Na–B–C (GPU)    1550                88                   >80%
Na–B–C (CPU)    1520                98                   >80%
Ce–Co–B (GPU)   2112                50                   >80%
Fe–Co–Zr        similar scaling     similar scaling      >80%

Strong scaling is near-ideal; million-candidate ML screenings are executed in minutes and high-throughput DFT relaxations are completed within hours on tens to hundreds of nodes. GPU runtimes outperform CPU runs, with parallel efficiency remaining high as node count increases.
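
For reference, strong-scaling speedup and parallel efficiency follow the standard definitions S(N) = T_1 / T_N and E(N) = S(N) / N; the snippet below evaluates them for hypothetical timings, not the benchmark values above.

```python
# Strong-scaling speedup and parallel efficiency (standard definitions);
# the timings here are hypothetical, not the benchmark values from the table.
def strong_scaling(t_1node, t_nnode, n_nodes):
    speedup = t_1node / t_nnode
    efficiency = speedup / n_nodes
    return speedup, efficiency

s, e = strong_scaling(t_1node=1600.0, t_nnode=60.0, n_nodes=32)
print(f"speedup = {s:.1f}x, efficiency = {e:.0%}")   # speedup = 26.7x, efficiency = 83%
```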

5. Applications in Functional Materials Discovery

The framework is validated on heterogeneous systems including Fe–Co–Zr (rare-earth-free magnet design), Ce–Co–B, and Na–B–C (phase space exploration):

  • Fe–Co–Zr: From ~900,000 generated structures, ML screening reduced the set to ~3,100 DFT-evaluated candidates, revealing nine new stable phases and 81 metastable compounds (≤0.1 eV/atom above the convex hull).
  • Na–B–C: The pipeline generated and evaluated new candidate structures, updating phase diagrams with metastable phases colored by energy proximity to the hull.

Automated post-processing delivers relaxed structures, updated phase diagrams, and property tables.
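
A hedged sketch of this hull-update step, using pymatgen's phase-diagram tools as a stand-in for exa-AMD's own post-processing, is shown below; all compositions and energies are hypothetical placeholders.

```python
# Hedged sketch of the convex-hull update step using pymatgen's phase-diagram
# tools; compositions and energies are hypothetical placeholders.
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDEntry
from pymatgen.core import Composition

# Elemental references plus a few (hypothetical) computed compounds, energies in eV.
entries = [
    PDEntry(Composition("Na"), -1.30),
    PDEntry(Composition("B"), -6.70),
    PDEntry(Composition("C"), -9.20),
    PDEntry(Composition("NaBC"), -17.80),
    PDEntry(Composition("Na2BC"), -18.40),
]
pd = PhaseDiagram(entries)

for entry in entries:
    e_hull = pd.get_e_above_hull(entry)   # eV/atom above the convex hull
    tag = "stable" if e_hull < 1e-8 else ("metastable" if e_hull <= 0.1 else "unstable")
    print(f"{entry.composition.reduced_formula:>6s}  {e_hull:.3f} eV/atom  {tag}")
```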

6. Modular Design, Extensibility, and Community Resources

exa-AMD’s modularity allows practitioners to:

  • Substitute ML models or DFT solvers without disrupting workflow logic (a generic illustration follows this list).
  • Customize or extend structure pool inputs (CIF library).
  • Use plugin scripts for data handling, workflow orchestration, and job submission.
  • Access open-source code, documentation, reproducible test cases, and tutorials at https://github.com/ml-AMD/exa-amd/.
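
As a generic illustration of this kind of substitution (not exa-AMD's actual plugin API), the sketch below swaps the screening stage behind a fixed callable interface while the rest of the pipeline stays unchanged.

```python
# Generic illustration of swapping the screening model behind a fixed callable
# interface. This is NOT exa-AMD's actual plugin API, only a pattern showing
# how a modular stage can be replaced without touching the workflow logic.
from typing import Callable, Iterable

Screener = Callable[[Iterable[str]], list[str]]   # structures in, shortlist out

def cgcnn_screener(structures):
    # placeholder: keep everything (a real screener ranks by predicted E_f)
    return list(structures)

def alternative_screener(structures):
    # e.g. a different ML potential or a simple heuristic filter
    return [s for i, s in enumerate(structures) if i % 2 == 0]

def run_pipeline(structures, screener: Screener):
    shortlist = screener(structures)
    # ... downstream DFT relaxation and hull analysis stay unchanged ...
    return shortlist

print(run_pipeline(["s1", "s2", "s3"], cgcnn_screener))
print(run_pipeline(["s1", "s2", "s3"], alternative_screener))
```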

7. Impact and Future Directions

Through its exascale-oriented design and workflow automation, exa-AMD shifts materials discovery campaigns from timescales of months to hours or days. The planned integration of adaptive genetic algorithms and advanced ML potentials (noted as future work) is expected to expand the design space toward novel crystal motifs and further accelerate structure optimization.

These developments, along with open community engagement and documented test cases, position exa-AMD as a robust platform for exploring complex composition-structure-property relations and as a foundation for exascale-enabled functional materials discovery (Xiaa et al., 1 Oct 2025).
