All-atom Diffusion Transformers: Unified generative modelling of molecules and materials (2503.03965v2)

Published 5 Mar 2025 in cs.LG and cs.AI

Abstract: Diffusion models are the standard toolkit for generative modelling of 3D atomic systems. However, for different types of atomic systems -- such as molecules and materials -- the generative processes are usually highly specific to the target system despite the underlying physics being the same. We introduce the All-atom Diffusion Transformer (ADiT), a unified latent diffusion framework for jointly generating both periodic materials and non-periodic molecular systems using the same model: (1) An autoencoder maps a unified, all-atom representation of molecules and materials to a shared latent embedding space; and (2) A diffusion model is trained to generate new latent embeddings that the autoencoder can decode to sample new molecules or materials. Experiments on MP20, QM9 and GEOM-DRUGS datasets demonstrate that jointly trained ADiT generates realistic and valid molecules as well as materials, obtaining state-of-the-art results on par with molecule- and crystal-specific models. ADiT uses standard Transformers with minimal inductive biases for both the autoencoder and diffusion model, resulting in significant speedups during training and inference compared to equivariant diffusion models. Scaling ADiT up to half a billion parameters predictably improves performance, representing a step towards broadly generalizable foundation models for generative chemistry. Open source code: https://github.com/facebookresearch/all-atom-diffusion-transformer

Summary

All-atom Diffusion Transformers: Unified Generative Modelling of Molecules and Materials

The paper introduces the All-atom Diffusion Transformer (ADiT), a unified latent diffusion framework for generative modelling of 3D atomic systems. ADiT bridges the gap between disparate system types, periodic materials and non-periodic molecules, by generating both with a single model.

Framework Details

ADiT operates in two stages. First, a Variational Autoencoder (VAE) encodes all-atom configurations of molecules and crystals into a shared latent representation, capturing both categorical and continuous atomic attributes in one embedding space. Second, a Diffusion Transformer is trained on this latent space, using Gaussian diffusion to progressively refine random noise into latents that decode to molecular or crystalline structures. Sampling employs classifier-free guidance, which improves the validity of the generated atomic systems.
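
To make the two-stage design concrete, here is a minimal PyTorch sketch of the sampling path, assuming a trained VAE and latent denoiser. The module names, the plain DDPM update, and the noise schedule are illustrative assumptions for this summary, not the paper's actual implementation:

```python
import torch

@torch.no_grad()
def sample_latents(denoiser, shape, cond, guidance_scale=2.0,
                   num_steps=1000, device="cpu"):
    """DDPM-style ancestral sampling with classifier-free guidance (sketch).

    `denoiser` is assumed to be a Transformer predicting the noise added to a
    set of latent tokens; `cond` is an optional class/domain label.
    """
    betas = torch.linspace(1e-4, 0.02, num_steps, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    z = torch.randn(shape, device=device)  # start from pure Gaussian noise
    for t in reversed(range(num_steps)):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)

        # Classifier-free guidance: blend conditional and unconditional predictions.
        eps_cond = denoiser(z, t_batch, cond)
        eps_uncond = denoiser(z, t_batch, None)
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

        # Reverse-step posterior mean; add noise except at the final step.
        mean = (z - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) \
               / torch.sqrt(alphas[t])
        z = (mean + torch.sqrt(betas[t]) * torch.randn_like(z)) if t > 0 else mean
    return z

# Usage (hypothetical): sample latents, then decode with the trained VAE.
# latents = sample_latents(denoiser, (8, num_tokens, latent_dim), cond=domain_id)
# atom_types, coords, lattice = vae.decode(latents)
```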

Technical Approach

ADiT's strength lies in processing categorical and continuous atomic attributes within a single latent generative process, whereas traditional methods apply independent diffusion processes to each modality. Using standard Transformers with minimal inductive biases makes training and inference scalable, avoiding the computational overhead of equivariant architectures that hard-code symmetry constraints.
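
As a rough sketch of what such a unified, all-atom input could look like, the snippet below embeds each atom's categorical type and continuous coordinates into a single token for a standard (non-equivariant) Transformer. The class name, dimensions, and the specific attribute set are assumptions for illustration, not the repository's actual interface:

```python
import torch
import torch.nn as nn

class AtomTokenizer(nn.Module):
    """Fuse categorical and continuous atomic attributes into one token per
    atom, so a single Transformer can process molecules and crystals alike.
    (A hypothetical sketch; names and dimensions are assumed.)"""

    def __init__(self, num_elements=100, d_model=256):
        super().__init__()
        self.type_embed = nn.Embedding(num_elements, d_model)  # categorical: element type
        self.cart_proj = nn.Linear(3, d_model)                 # continuous: Cartesian xyz
        self.frac_proj = nn.Linear(3, d_model)                 # continuous: fractional coords

    def forward(self, atom_types, cart_coords, frac_coords):
        # atom_types: (batch, atoms) int64; coords: (batch, atoms, 3) float
        return (self.type_embed(atom_types)
                + self.cart_proj(cart_coords)
                + self.frac_proj(frac_coords))

# Non-periodic molecules can pass frac_coords = torch.zeros_like(cart_coords),
# letting periodic and non-periodic systems share one input representation.
```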

Scaled up to half a billion parameters, ADiT shows predictable performance improvements with model size, reinforcing the framework's relevance as a step towards broadly generalizable foundation models for generative chemistry.

Results

Experiments on the QM9 and GEOM-DRUGS (molecule) and MP20 (material) datasets confirm that ADiT generates high-fidelity molecular and material samples, achieving state-of-the-art results on par with models tailored specifically to either domain. Notably, handling periodic and non-periodic systems in a unified manner enables transfer learning: validity and stability metrics improve when the model is trained jointly on both molecules and materials, revealing synergy in learning across system boundaries.

The reported numbers underscore this capacity. For instance, ADiT improves the rate of stable, unique, and novel (S.U.N.) generated structures by 25% over the previous best models, with stability verified through Density Functional Theory (DFT) calculations.

Implications and Future Directions

This paper’s contributions have substantial theoretical and practical implications. Theoretically, ADiT proposes a paradigm shift in understanding and modelling atomic systems through latent space diffusion, thereby providing a robust foundation for future research targeted at scalable generative models that unify different atomic systems. Practically, such frameworks can expedite advancements in materials science and drug discovery, providing high-quality generative models that can aid in the design and synthesis of novel compounds.

Furthermore, extending this unified approach to incorporate a broader range of atomic systems, like biomolecular complexes, could deepen the impact of this methodology. Future work could explore scaling ADiT with larger datasets to enhance its practical applicability in discovering materials with desired properties or functional groups, pushing the limits of inverse design in chemistry.

The All-atom Diffusion Transformer stands as a significant contribution to the field, showcasing both innovation in model design and potential for broad application. This unified approach paves the way for generatively bridging diverse atomic systems, fostering a more integrated and universal understanding of atomic interactions and their generative modelling.
