
DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models (2305.15194v2)

Published 24 May 2023 in cs.CV, cs.AI, and cs.LG

Abstract: In this study, we aim to extend the capabilities of diffusion-based text-to-image (T2I) generation models by incorporating diverse modalities beyond textual description, such as sketch, box, color palette, and style embedding, within a single model. We thus design a multimodal T2I diffusion model, coined DiffBlender, by separating the channels of conditions into three types, i.e., image forms, spatial tokens, and non-spatial tokens. The unique architecture of DiffBlender facilitates adding new input modalities, pioneering a scalable framework for conditional image generation. Notably, we achieve this without altering the parameters of the existing generative model, Stable Diffusion, updating only partial components. Our study establishes new benchmarks in multimodal generation through quantitative and qualitative comparisons with existing conditional generation methods. We demonstrate that DiffBlender faithfully blends all the provided information and showcase its various applications in detailed image synthesis.
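The abstract's central design idea, grouping every conditioning input into one of three channel types (image forms such as sketches, spatial tokens such as boxes, and non-spatial tokens such as color palettes or style embeddings), can be illustrated with a minimal routing sketch. All names and the modality-to-type mapping below are hypothetical illustrations of the stated grouping, not the authors' actual API or implementation:

```python
# Hypothetical sketch of DiffBlender-style condition routing. The paper
# separates condition channels into three types: image forms, spatial
# tokens, and non-spatial tokens. The modality names and registry below
# are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import Dict, List

# Assumed example mapping from modality name to channel type.
DEFAULT_CHANNELS = {
    "sketch": "image_form",
    "box": "spatial_token",
    "color_palette": "non_spatial_token",
    "style_embedding": "non_spatial_token",
}

@dataclass
class ConditionRouter:
    """Groups provided modalities by channel type. New modalities can be
    registered later, mirroring the paper's claim that the architecture
    facilitates adding new input modalities."""
    registry: Dict[str, str] = field(
        default_factory=lambda: dict(DEFAULT_CHANNELS)
    )

    def register(self, modality: str, channel_type: str) -> None:
        # Only the three channel types named in the abstract are allowed.
        assert channel_type in {"image_form", "spatial_token", "non_spatial_token"}
        self.registry[modality] = channel_type

    def route(self, conditions: Dict[str, object]) -> Dict[str, List[str]]:
        grouped: Dict[str, List[str]] = {
            "image_form": [], "spatial_token": [], "non_spatial_token": []
        }
        for name in conditions:
            grouped[self.registry[name]].append(name)
        return grouped

router = ConditionRouter()
groups = router.route({"sketch": None, "box": None, "style_embedding": None})
```

In this sketch, `groups` maps each channel type to the modalities supplied by the user, which is the kind of dispatch a three-way conditioning architecture would need before feeding each group to its dedicated (partially trained) components while the Stable Diffusion backbone stays frozen.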

Authors (5)
  1. Sungnyun Kim (19 papers)
  2. Junsoo Lee (13 papers)
  3. Kibeom Hong (12 papers)
  4. Daesik Kim (15 papers)
  5. Namhyuk Ahn (18 papers)
Citations (12)
