Unified Multi-Modal Latent Diffusion for Joint Subject and Text Conditional Image Generation (2303.09319v1)

Published 16 Mar 2023 in cs.CV

Abstract: Language-guided image generation has achieved great success by using diffusion models. However, text can be insufficiently detailed to describe highly specific subjects such as a particular dog or a certain car, which makes pure text-to-image generation not accurate enough to satisfy user requirements. In this work, we present a novel Unified Multi-Modal Latent Diffusion (UMM-Diffusion) which takes joint text and images containing specified subjects as input sequences and generates customized images with those subjects. More specifically, both input text and images are encoded into one unified multi-modal latent space, in which the input images are learned to be projected to pseudo word embeddings that can be further combined with text to guide image generation. In addition, to eliminate irrelevant parts of the input images such as background or illumination, we propose a novel sampling technique for the diffusion-based image generator which fuses the results guided by the multi-modal input and by the pure text input. By leveraging a large-scale pre-trained text-to-image generator and the designed image encoder, our method is able to generate high-quality images whose complex semantics reflect both the input text and the input images.
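The fused sampling described in the abstract can be illustrated with a small sketch. The exact fusion rule is not given here, so the following is an assumption: a classifier-free-guidance-style combination in which the guidance direction is a weighted blend of the text-only direction and the multi-modal (text + subject image) direction. The function name `fused_guidance` and the blending weight `lam` are hypothetical, not the paper's notation.

```python
import numpy as np

def fused_guidance(eps_uncond, eps_text, eps_multimodal, scale=7.5, lam=0.5):
    """Blend text-only and multi-modal guidance directions (illustrative sketch).

    eps_uncond:     noise prediction with no conditioning
    eps_text:       noise prediction conditioned on text only
    eps_multimodal: noise prediction conditioned on text + subject image
    scale:          overall guidance strength
    lam:            weight on the multi-modal direction (0 = pure text guidance)
    """
    text_dir = eps_text - eps_uncond          # direction induced by the text prompt
    mm_dir = eps_multimodal - eps_uncond      # direction induced by text + subject image
    fused = (1.0 - lam) * text_dir + lam * mm_dir
    return eps_uncond + scale * fused
```

With `lam=0` this reduces to standard classifier-free guidance on the text prompt alone; increasing `lam` shifts the sample toward the specified subject while the text branch suppresses irrelevant image content such as background or illumination.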

Authors (5)
  1. Yiyang Ma (15 papers)
  2. Huan Yang (306 papers)
  3. Wenjing Wang (23 papers)
  4. Jianlong Fu (91 papers)
  5. Jiaying Liu (99 papers)
Citations (59)