DiffiT: Diffusion Vision Transformers for Image Generation (2312.02139v3)

Published 4 Dec 2023 in cs.CV, cs.AI, and cs.LG

Abstract: Diffusion models with their powerful expressivity and high sample quality have achieved state-of-the-art (SOTA) performance in the generative domain. The pioneering Vision Transformer (ViT) has also demonstrated strong modeling capabilities and scalability, especially for recognition tasks. In this paper, we study the effectiveness of ViTs in diffusion-based generative learning and propose a new model denoted as Diffusion Vision Transformers (DiffiT). Specifically, we propose a methodology for fine-grained control of the denoising process and introduce the Time-dependent Multihead Self-Attention (TMSA) mechanism. DiffiT is surprisingly effective in generating high-fidelity images with significantly better parameter efficiency. We also propose latent and image space DiffiT models and show SOTA performance on a variety of class-conditional and unconditional synthesis tasks at different resolutions. The Latent DiffiT model achieves a new SOTA FID score of 1.73 on the ImageNet-256 dataset while having 19.85% and 16.88% fewer parameters than other transformer-based diffusion models such as MDT and DiT, respectively. Code: https://github.com/NVlabs/DiffiT

Diffusion models are at the forefront of AI developments in image generation, thanks to their expressive power and high-quality results. Recent advances have had transformative impact on applications ranging from text-to-image generation to complex scene synthesis that was previously unattainable.

A new paper introduces Diffusion Vision Transformers (DiffiT), adding a novel component to diffusion-based generative learning: the time-dependent multihead self-attention (TMSA) module. TMSA lets the attention layers adjust dynamically across the stages of the denoising process, capturing both the temporal dynamics of diffusion and the long-range spatial dependencies within images.
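To make the idea concrete, the following is a minimal PyTorch sketch of a time-dependent self-attention layer in the spirit of TMSA: the query, key, and value projections each combine a spatial term and a time term, so the attention pattern changes with the denoising step. Class and parameter names here are illustrative, not taken from the official repository.

    import torch
    import torch.nn as nn

    class TimeDependentSelfAttention(nn.Module):
        # q, k, and v each mix a spatial and a time component, so the
        # attention weights vary with the denoising time step.
        def __init__(self, dim, num_heads=8):
            super().__init__()
            self.num_heads = num_heads
            self.scale = (dim // num_heads) ** -0.5
            # separate projections for spatial tokens and the time embedding
            self.qkv_spatial = nn.Linear(dim, dim * 3, bias=False)
            self.qkv_time = nn.Linear(dim, dim * 3, bias=False)
            self.proj = nn.Linear(dim, dim)

        def forward(self, x, t_emb):
            # x: (B, N, C) spatial tokens; t_emb: (B, C) time embedding
            B, N, C = x.shape
            # linear combination of spatial and time contributions
            qkv = self.qkv_spatial(x) + self.qkv_time(t_emb).unsqueeze(1)
            q, k, v = qkv.chunk(3, dim=-1)

            def split_heads(z):  # (B, N, C) -> (B, H, N, C // H)
                return z.view(B, N, self.num_heads, -1).transpose(1, 2)

            q, k, v = split_heads(q), split_heads(k), split_heads(v)
            attn = (q @ k.transpose(-2, -1)) * self.scale
            out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
            return self.proj(out)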

At the core of this system is a U-shaped encoder-decoder architecture drawing inspiration from vision transformers (ViTs), a highly successful family of models for visual recognition tasks. Unlike existing denoising diffusion models, DiffiT adapts both its structural and attention components to the time step of the generation process: attention can concentrate on coarse, low-frequency structure early on, when samples are mostly noise, and shift toward high-frequency details as denoising nears completion.
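A sketch of how such a block could be assembled, assuming the TMSA layer above and a standard pre-norm ViT block layout; the paper's actual block may differ in normalization and positional-bias details, and the U-shaped network would stack such blocks at several resolutions with down- and upsampling between them.

    class DiffiTBlock(nn.Module):
        # One transformer block: time-dependent attention followed by an
        # MLP, each with pre-norm and a residual connection.
        def __init__(self, dim, num_heads=8, mlp_ratio=4):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            self.attn = TimeDependentSelfAttention(dim, num_heads)
            self.norm2 = nn.LayerNorm(dim)
            self.mlp = nn.Sequential(
                nn.Linear(dim, dim * mlp_ratio),
                nn.GELU(),
                nn.Linear(dim * mlp_ratio, dim),
            )

        def forward(self, x, t_emb):
            x = x + self.attn(self.norm1(x), t_emb)
            x = x + self.mlp(self.norm2(x))
            return x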

The researchers benchmarked DiffiT on several datasets, including ImageNet and CIFAR-10, achieving state-of-the-art results in both image-space and latent-space generation tasks. Notably, in latent-space generation, where images are synthesized in a compressed latent representation and then decoded to high resolution, DiffiT set a new state-of-the-art FID score of 1.73 on the ImageNet-256 dataset.
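For reference, FID (Fréchet Inception Distance), the metric behind that 1.73 figure, compares Gaussian fits to Inception-network features of real and generated images; lower is better. This is the standard definition, not anything specific to this paper:

    FID = \lVert \mu_r - \mu_g \rVert_2^2 + \mathrm{Tr}\left( \Sigma_r + \Sigma_g - 2 (\Sigma_r \Sigma_g)^{1/2} \right)

where (\mu_r, \Sigma_r) and (\mu_g, \Sigma_g) are the feature means and covariances of the real and generated samples, respectively.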

In a series of experiments, the authors demonstrated that the design choices in DiffiT, particularly the integration of time-dependent self-attention, are crucial. Ablation studies further showed that different configurations of the model's components greatly affect performance. For instance, decoupling spatial and temporal information within the self-attention module resulted in notably poorer results, underscoring the importance of their joint integration for the model's efficiency.
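One hypothetical way to mimic such a decoupled variant with the sketch above is to zero out the time pathway so that q, k, and v no longer depend on the denoising step; the paper's actual ablation configuration may differ.

    # Remove the time pathway: attention weights become time-invariant.
    block = DiffiTBlock(dim=256)
    with torch.no_grad():
        block.attn.qkv_time.weight.zero_()

    x = torch.randn(2, 64, 256)    # batch of 2, 64 tokens, width 256
    t_emb = torch.randn(2, 256)    # now ignored by the attention weights
    out = block(x, t_emb)          # shape: (2, 64, 256)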

In conclusion, DiffiT represents a significant advancement in diffusion-based image generation models. With its novel time-dependent self-attention mechanism and transformer-based architecture, it sets new standards in the quality of generated images, displaying impressive control over the synthesis process at various temporal stages. The open-source code repository offers the community a valuable resource to further explore and expand upon these results.

Authors (5)
  1. Ali Hatamizadeh (33 papers)
  2. Jiaming Song (78 papers)
  3. Guilin Liu (78 papers)
  4. Jan Kautz (215 papers)
  5. Arash Vahdat (69 papers)
Citations (37)