
One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale (2303.06555v2)

Published 12 Mar 2023 in cs.LG and cs.CV

Abstract: This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. Our key insight is -- learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model -- it perturbs data in all modalities instead of a single modality, inputs individual timesteps for different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks, and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to bespoke models (e.g., Stable Diffusion and DALL-E 2) on representative tasks (e.g., text-to-image generation).
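The unified objective described above can be sketched in code. This is a minimal, illustrative sketch only: the toy dimensions, the linear noise schedule, the `JointNoisePredictor` stand-in module, and all function names are assumptions for exposition, not the paper's actual transformer (U-ViT-style) architecture or schedule. The key idea it demonstrates is that each modality receives its own independently sampled timestep, and a single network predicts the noise of all modalities at once.

```python
import torch

def perturb(x, t, T=1000):
    """Forward diffusion q(x_t | x_0) with a toy linear alpha-bar schedule
    (illustrative only; the paper uses a standard diffusion schedule)."""
    alpha_bar = (1.0 - t.float() / T).view(-1, 1)
    eps = torch.randn_like(x)
    x_t = alpha_bar.sqrt() * x + (1.0 - alpha_bar).sqrt() * eps
    return x_t, eps

class JointNoisePredictor(torch.nn.Module):
    """Stand-in for the paper's transformer: takes both noisy modalities
    and both timesteps, returns a noise estimate for each modality."""
    def __init__(self, d_img=8, d_txt=4):
        super().__init__()
        self.d_img = d_img
        self.net = torch.nn.Linear(d_img + d_txt + 2, d_img + d_txt)

    def forward(self, x_t, y_t, t_img, t_txt):
        h = torch.cat([x_t, y_t,
                       t_img.float().view(-1, 1) / 1000.0,
                       t_txt.float().view(-1, 1) / 1000.0], dim=-1)
        out = self.net(h)
        return out[:, :self.d_img], out[:, self.d_img:]

def training_step(model, x0, y0, T=1000):
    """One step of the unified objective: independent timesteps per modality,
    joint noise prediction, summed per-modality MSE losses."""
    B = x0.shape[0]
    t_img = torch.randint(0, T, (B,))   # timestep for the image modality
    t_txt = torch.randint(0, T, (B,))   # independent timestep for text
    x_t, eps_x = perturb(x0, t_img, T)
    y_t, eps_y = perturb(y0, t_txt, T)
    pred_x, pred_y = model(x_t, y_t, t_img, t_txt)
    return ((pred_x - eps_x) ** 2).mean() + ((pred_y - eps_y) ** 2).mean()

model = JointNoisePredictor()
x0, y0 = torch.randn(16, 8), torch.randn(16, 4)
loss = training_step(model, x0, y0)
```

At sampling time, setting the timesteps appropriately recovers each task from this one model: fixing one modality's timestep at 0 (clean data) yields conditional generation such as text-to-image, while running both chains together yields joint image-text pair generation.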

Authors (10)
  1. Fan Bao (30 papers)
  2. Shen Nie (10 papers)
  3. Kaiwen Xue (12 papers)
  4. Chongxuan Li (75 papers)
  5. Shi Pu (109 papers)
  6. Yaole Wang (2 papers)
  7. Gang Yue (2 papers)
  8. Yue Cao (147 papers)
  9. Hang Su (224 papers)
  10. Jun Zhu (424 papers)
Citations (123)