Scalable Diffusion Models with Transformers (2212.09748v2)

Published 19 Dec 2022 in cs.CV and cs.LG

Abstract: We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops -- through increased transformer depth/width or increased number of input tokens -- consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.

Introduction

The paper "Scalable Diffusion Models with Transformers" (Peebles et al., 2022 ) explores the viability of replacing the U-Net backbone universally employed in standard latent diffusion models with a transformer architecture. The work presents Diffusion Transformers (DiTs), which handle image generation by processing latent patches analogous to tokens in natural language processing. This approach provides an architecture unification by leveraging the advancements in transformer technology to tackle the scalability and efficiency challenges in generative image synthesis.

Architecture and Model Design

At its core, the DiT architecture draws inspiration from Vision Transformers (ViTs) by adapting the classical transformer paradigm for diffusion-based image synthesis. Key aspects include:

  • Latent Patch Representation: Instead of operating in pixel space, DiTs process latent representations of images produced by a pretrained autoencoder. The latent is split into patches that serve as input tokens to the transformer, letting the model apply global self-attention over the image in the same way transformers attend over token sequences in language.
  • Adaptive Layer Normalization (adaLN-Zero): The incorporation of adaLN-Zero blocks is a distinct feature. This variant of adaptive layer normalization zero-initializes the final modulation layer so that each transformer block starts out as the identity function. Such initialization both smooths early training dynamics and provides an efficient mechanism for conditioning on class labels and noise timesteps (a minimal sketch follows this list).
  • Scalability in Depth and Width: The authors systematically explore scalability by varying transformer depth, width, and the number of input tokens. They find a direct correlation between forward-pass complexity (measured in Gflops) and performance, quantified by Fréchet Inception Distance (FID): spending more compute per forward pass consistently improves sample fidelity.
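
To make the adaLN-Zero block concrete, here is a minimal sketch of a DiT-style transformer block in PyTorch. It is an illustrative reconstruction rather than the authors' released code: the attention and MLP are standard library modules, and only the conditioning pathway (zero-initialized shift/scale/gate modulation) mirrors the mechanism described above.

```python
import torch
import torch.nn as nn

def modulate(x, shift, scale):
    # Shift-and-scale modulation applied to the normalized tokens.
    return x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

class DiTBlock(nn.Module):
    """Sketch of a transformer block with adaLN-Zero conditioning (illustrative)."""
    def __init__(self, hidden_size, num_heads, mlp_ratio=4.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, int(hidden_size * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(hidden_size * mlp_ratio), hidden_size),
        )
        # Regress 6 modulation parameters (shift/scale/gate for attention and MLP)
        # from the conditioning vector (timestep + class embedding).
        self.adaLN_modulation = nn.Sequential(
            nn.SiLU(),
            nn.Linear(hidden_size, 6 * hidden_size),
        )
        # The "Zero" part: zero-init the modulation layer so every gate starts at 0
        # and the block begins training as the identity function.
        nn.init.zeros_(self.adaLN_modulation[-1].weight)
        nn.init.zeros_(self.adaLN_modulation[-1].bias)

    def forward(self, x, c):
        # x: latent-patch tokens (B, N, D); c: conditioning vector (B, D)
        shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = \
            self.adaLN_modulation(c).chunk(6, dim=-1)
        h = modulate(self.norm1(x), shift_msa, scale_msa)
        x = x + gate_msa.unsqueeze(1) * self.attn(h, h, h)[0]
        h = modulate(self.norm2(x), shift_mlp, scale_mlp)
        x = x + gate_mlp.unsqueeze(1) * self.mlp(h)
        return x
```

Because the gates start at zero, each residual branch initially contributes nothing, so the network begins training close to the identity map, which is the property the paper ties to smoother early training dynamics.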

Scalability and Numerical Performance

A significant contribution of the work is the detailed analysis of computational scaling. The investigation reveals that:

  • Correlation between Gflops and FID: Models with a higher computational footprint (quantified in Gflops) exhibit systematically lower FID scores. In particular, adding layers, widening the hidden dimension, and processing more tokens (via smaller patches) each improve the quality of the generated images (a back-of-envelope compute estimate appears at the end of this section).
  • State-of-the-Art Benchmarks: The paper reports that the DiT-XL/2 configuration achieves state-of-the-art performance on the class-conditional ImageNet benchmarks at both 512×512 and 256×256 resolutions. Notably, on the 256×256 benchmark, the model reaches an FID of 2.27, surpassing all prior diffusion models on that benchmark and challenging the long-standing performance advantage of U-Net-based architectures.

These numerical results underscore the scalability potential of transformer-based diffusion models and highlight their competitive efficiency compared to their convolutional counterparts.
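
As a rough illustration of how depth, width, and token count translate into forward-pass compute, the sketch below estimates Gflops for a DiT backbone. The hyperparameters in the example (depth 28, hidden size 1152, a 32×32×4 latent, patch sizes 2 and 4) follow the DiT-XL configurations described in the paper, but the counting itself is a standard back-of-envelope estimate (one multiply-accumulate counted as one flop) and should be read as an approximation, not the paper's exact accounting.

```python
def dit_gflops(depth: int, hidden: int, latent_hw: int = 32, patch: int = 2) -> float:
    """Approximate forward-pass Gflops of a DiT backbone (rough estimate)."""
    n_tokens = (latent_hw // patch) ** 2        # patchifying the latent -> input tokens
    per_block = 12 * n_tokens * hidden ** 2     # QKV/output projections + 4x-wide MLP
    per_block += 2 * n_tokens ** 2 * hidden     # attention scores + weighted value sum
    return depth * per_block / 1e9

# Halving the patch size quadruples the token count and hence roughly the compute.
print(dit_gflops(depth=28, hidden=1152, patch=2))  # ~118 Gflops (DiT-XL/2 scale)
print(dit_gflops(depth=28, hidden=1152, patch=4))  # ~29 Gflops  (DiT-XL/4 scale)
```

The estimate makes the scaling axes explicit: depth and width enter through the per-block term, while token count (set by patch size) enters both linearly and, through attention, quadratically; the paper's central finding is that moving up any of these axes lowers FID.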

Empirical Evaluation and Ablation Studies

The paper provides extensive empirical evidence to substantiate the proposed architecture’s efficacy:

  • Ablation on Conditioning Strategies: Different methods for incorporating conditioning information were evaluated, including in-context conditioning, cross-attention, and adaptive layer norm variants. The experiments show that the adaLN-Zero block is the most effective, achieving the best FID without incurring a heavy computational penalty (a shape-level sketch of two of these strategies follows this list).
  • Performance Metrics: In addition to FID, the evaluations report sFID, Inception Score, Precision, and Recall. This multi-metric evaluation reinforces the claim that DiTs maintain competitive sample quality while improving compute efficiency.
  • Training Efficiency: The research indicates that larger DiT models use training compute more efficiently: for a given training budget, bigger models reach lower FID than smaller ones despite their higher per-step cost.
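
For intuition about what the conditioning ablation compares, the fragment below contrasts, at the level of tensor shapes, in-context conditioning with adaLN-style conditioning. Tensor names and sizes here are illustrative assumptions, not values taken from the paper's code; the adaLN pathway itself is shown in the DiTBlock sketch earlier.

```python
import torch

B, N, D = 8, 256, 1152              # batch, latent-patch tokens, hidden size (illustrative)
x = torch.randn(B, N, D)            # patchified latent tokens
c = torch.randn(B, D)               # combined timestep + class-label embedding

# In-context conditioning: append the conditioning embedding as an extra token,
# so every self-attention layer sees it, at the cost of a longer sequence.
x_in_context = torch.cat([c.unsqueeze(1), x], dim=1)    # (B, N + 1, D)

# adaLN conditioning: c never joins the token sequence; it only produces
# shift/scale/gate parameters (here for one sublayer), leaving sequence length
# and attention cost unchanged. The randomly initialized Linear is used purely
# to show the shapes involved.
shift, scale, gate = torch.nn.Linear(D, 3 * D)(c).chunk(3, dim=-1)   # each (B, D)
```

In the paper's ablation, the adaLN-Zero variant of the second strategy delivers the best FID while adding the least compute, which is why it is adopted for the final DiT models.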

Theoretical and Practical Implications

The integration of a transformer backbone in a diffusion model carries several implications:

  • Architecture Unification: The successful application of transformers in this new context demonstrates that the previously assumed necessity for U-Net inductive biases in diffusion-based models can be overcome. This finding opens avenues for further research where transformer architectures might replace or augment standard convolutional designs in other generative tasks.
  • Scaling Laws in Generative Models: The detailed analysis provides valuable insights into how diffusion models scale with computational resources. This not only aids in optimizing future model designs but also sets benchmarks for effective trade-offs between compute cost and image quality.
  • Cross-Domain Flexibility: Leveraging transformers, known for their versatility in domains like NLP, suggests the potential for cross-modal applications. The modular and standardized design of DiTs encourages exploration into multimodal generative tasks and content creation strategies that extend beyond conventional image synthesis.

Conclusion

"Scalable Diffusion Models with Transformers" makes a compelling case for transitioning from U-Net-based architectures to transformer-based designs within the context of latent diffusion models. The paper demonstrates that DiTs not only achieve state-of-the-art performance—evidenced by a remarkable FID of 2.27 on 256×256 ImageNet—but also scale efficiently with increases in transformer depth, width, and patch tokens. By adopting techniques like adaLN-Zero, the research addresses common training challenges and reinforces the feasibility of transformer architectures in high-quality generative modeling.

This comprehensive evaluation provides detailed insights into both the theoretical underpinnings and practical implications of adopting transformers in diffusion models, thus laying the groundwork for further research in generative AI that emphasizes scalability and efficiency across diverse applications.

Authors
  1. William Peebles
  2. Saining Xie
Citations (1,205)