Diffusion models are at the forefront of AI developments in image generation, owing to the quality and fidelity of the samples they produce. Recent advances have had a transformative impact on applications ranging from text-to-image generation to complex scene synthesis that was previously unattainable.
A new paper introduces Diffusion Vision Transformers (DiffiT), which adds an innovative layer to diffusion-based generative learning: a time-dependent self-attention module. This module lets the attention layers adjust dynamically at different stages of the image denoising process, efficiently adapting to both the temporal dynamics of diffusion and the spatial long-range dependencies within images.
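To make the idea concrete, below is a minimal PyTorch sketch of how a time-conditioned self-attention layer could be structured: both the spatial tokens and a time-step embedding contribute to the queries, keys, and values, so the attention pattern shifts as denoising progresses. The class name, projections, and shapes here are illustrative assumptions, not the authors' released code.

```python
import math
import torch
import torch.nn as nn

class TimeDependentSelfAttention(nn.Module):
    """Sketch of self-attention conditioned on the diffusion time step.

    Queries, keys, and values receive contributions from both the spatial
    tokens and a per-timestep embedding, so attention weights can change
    as denoising progresses. Names and shapes are illustrative only.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        # Separate projections for spatial tokens and the time embedding.
        self.qkv_spatial = nn.Linear(dim, 3 * dim, bias=False)
        self.qkv_time = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) spatial tokens; t_emb: (batch, dim) time embedding
        B, N, C = x.shape
        # Each of q, k, v is a sum of a spatial and a temporal projection,
        # so the resulting attention map depends on the diffusion step.
        qkv = self.qkv_spatial(x) + self.qkv_time(t_emb).unsqueeze(1)
        qkv = qkv.reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)              # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) / math.sqrt(self.head_dim)
        out = attn.softmax(dim=-1) @ v                    # (B, heads, N, head_dim)
        out = out.transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```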
At the core of the system is a U-shaped encoder-decoder architecture drawing inspiration from vision transformers (ViTs), a highly successful family of models for visual tasks. Unlike existing denoising diffusion models, DiffiT adapts both its structural and attention elements depending on the time step in the generation process. As a result, attention focuses differently early on, when images are mostly noise, and toward the end, when high-frequency details are being refined.
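The time step itself is typically turned into the conditioning vector via a sinusoidal embedding followed by a small MLP, as is common in DDPM-style diffusion models. The snippet below illustrates that pipeline, reusing the TimeDependentSelfAttention sketch above; the embedding scheme and all names are assumptions for illustration rather than details taken from the paper's code.

```python
import math
import torch
import torch.nn as nn

def sinusoidal_time_embedding(t: torch.Tensor, dim: int) -> torch.Tensor:
    """Standard sinusoidal embedding of the diffusion time step."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half)
    args = t.float().unsqueeze(1) * freqs.unsqueeze(0)        # (batch, half)
    return torch.cat([torch.sin(args), torch.cos(args)], dim=1)  # (batch, dim)

# Illustrative usage: condition the attention sketch above on the time step t.
dim = 256
t = torch.randint(0, 1000, (4,))                    # a batch of diffusion time steps
time_mlp = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))
t_emb = time_mlp(sinusoidal_time_embedding(t, dim)) # (4, 256) conditioning vector
x = torch.randn(4, 64, dim)                         # 64 spatial tokens per image
block = TimeDependentSelfAttention(dim)
out = block(x, t_emb)                               # attention now varies with t
print(out.shape)                                    # torch.Size([4, 64, 256])
```

Because the same spatial tokens paired with different time steps yield different attention maps, a single set of weights can behave differently across the noisy early stages and the detail-refining late stages of sampling.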
The researchers benchmarked DiffiT on several datasets, including ImageNet and CIFAR-10, achieving state-of-the-art results in both image-space and latent-space generation tasks. Notably, in latent-space generation, which is used to create high-resolution images from compressed representations, DiffiT set a new state-of-the-art score on the ImageNet-256 benchmark.
In a series of experiments, the authors demonstrated that the design choices in DiffiT, particularly the integration of the time-dependent self-attention, are crucial. Ablation studies further showed that different configurations of the model's components greatly affect performance. For instance, decoupling spatial and temporal information within the self-attention module led to notably poorer results, underscoring the importance of their integration to the model's effectiveness.
In conclusion, DiffiT represents a significant advancement in diffusion-based image generation models. With its novel time-dependent self-attention mechanism and transformer-based architecture, it sets new standards in the quality of generated images, displaying impressive control over the synthesis process at various temporal stages. The open-source code repository offers the community a valuable resource to further explore and expand upon these results.