
LAFITE: Towards Language-Free Training for Text-to-Image Generation (2111.13792v3)

Published 27 Nov 2021 in cs.CV and cs.LG

Abstract: One of the major challenges in training text-to-image generation models is the need for a large number of high-quality image-text pairs. While image samples are often easily accessible, the associated text descriptions typically require careful human captioning, which is particularly time-consuming and costly. In this paper, we propose the first method to train text-to-image generation models without any text data. Our method leverages the well-aligned multi-modal semantic space of the powerful pre-trained CLIP model: the requirement of text conditioning is seamlessly alleviated by generating text features from image features. Extensive experiments illustrate the effectiveness of the proposed method. We obtain state-of-the-art results on standard text-to-image generation tasks. Importantly, the proposed language-free model outperforms most existing models trained with full image-text pairs. Furthermore, our method can be applied to fine-tuning pre-trained models, saving both training time and cost. Our pre-trained model obtains competitive results in zero-shot text-to-image generation on the MS-COCO dataset, with only around 1% of the model size and training data size of the recently proposed large DALL-E model.
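The core idea of language-free training is that, because CLIP embeds images and text into a shared unit hypersphere, a pseudo text feature can be obtained by perturbing the normalized image feature. Below is a minimal illustrative sketch of this conditioning trick, not the paper's exact implementation: the function name, the fixed noise level, and the use of raw numpy arrays in place of real CLIP embeddings are all assumptions made for demonstration.

```python
import numpy as np

def pseudo_text_feature(img_feat, noise_level=0.1, rng=None):
    """Sketch of language-free conditioning: perturb a (stand-in for a)
    CLIP image embedding with scaled Gaussian noise and re-normalize,
    yielding a pseudo text feature near the image on the unit sphere.
    `noise_level` is an illustrative hyperparameter, not a paper value."""
    rng = np.random.default_rng() if rng is None else rng
    h = img_feat / np.linalg.norm(img_feat)    # L2-normalize image feature
    eps = rng.standard_normal(h.shape)         # random Gaussian direction
    eps = eps / np.linalg.norm(eps)            # unit-norm noise direction
    h_prime = h + noise_level * eps            # small perturbation of h
    return h_prime / np.linalg.norm(h_prime)   # project back to unit sphere
```

Because the perturbation is small relative to the unit-norm feature, the resulting pseudo text feature stays close (in cosine similarity) to the image feature, which is what lets it stand in for a real caption embedding during generator training.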

Authors (9)
  1. Yufan Zhou (36 papers)
  2. Ruiyi Zhang (98 papers)
  3. Changyou Chen (108 papers)
  4. Chunyuan Li (122 papers)
  5. Chris Tensmeyer (13 papers)
  6. Tong Yu (119 papers)
  7. Jiuxiang Gu (73 papers)
  8. Jinhui Xu (50 papers)
  9. Tong Sun (49 papers)
Citations (145)