Paragraph-to-Image Generation with Information-Enriched Diffusion Model (2311.14284v2)
Abstract: Text-to-image (T2I) models have recently experienced rapid development, achieving astonishing performance in terms of fidelity and textual alignment capabilities. However, given a long paragraph (up to 512 words), these generation models still struggle to achieve strong alignment and are unable to generate images depicting complex scenes. In this paper, we introduce an information-enriched diffusion model for the paragraph-to-image generation task, termed ParaDiffusion, which transfers the extensive semantic comprehension capabilities of large language models (LLMs) to image generation. At its core is using an LLM (e.g., Llama V2) to encode long-form text, followed by fine-tuning with LoRA to align the text-image feature spaces in the generation task. To facilitate training for long-text semantic alignment, we also curated a high-quality paragraph-image pair dataset, namely ParaImage. It comprises a small set of high-quality, meticulously annotated pairs and a large-scale synthetic subset whose long text descriptions are generated by a vision-language model. Experiments demonstrate that ParaDiffusion outperforms state-of-the-art models (SD XL, DeepFloyd IF) on ViLG-300 and ParaPrompts, achieving up to 15% and 45% higher human voting rates for visual appeal and text faithfulness, respectively. The code and dataset will be released to foster community research on long-text alignment.
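The core idea of the abstract, using a decoder-only LLM as a long-text encoder and adapting it with LoRA so its features can condition a diffusion model, can be sketched as below. This is a minimal illustration, not the authors' released code: the Hugging Face model id, the 512-token cap, the 2048-dim projection, and the `encode_paragraph` helper are all assumptions for the sake of the example.

```python
# Hedged sketch: a frozen LLM as a long-paragraph text encoder, with LoRA
# adapters to align its features to the image-generation feature space.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel
from peft import LoraConfig, get_peft_model

# Assumed checkpoint; the paper names Llama V2 as an example encoder.
MODEL_ID = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
llm = AutoModel.from_pretrained(MODEL_ID, torch_dtype=torch.float16)

# LoRA on the attention projections: only these low-rank adapters are trained,
# preserving the LLM's semantic knowledge while aligning text-image features.
lora_cfg = LoraConfig(r=16, lora_alpha=32,
                      target_modules=["q_proj", "v_proj"], lora_dropout=0.05)
llm = get_peft_model(llm, lora_cfg)

# Hypothetical linear projection into the diffusion model's cross-attention
# width (2048 here is an assumption, not a figure from the paper).
proj = nn.Linear(llm.config.hidden_size, 2048)

def encode_paragraph(text: str) -> torch.Tensor:
    """Encode long-form text (truncated to 512 tokens) into per-token
    conditioning embeddings for the diffusion model's cross-attention."""
    inputs = tokenizer(text, truncation=True, max_length=512,
                       return_tensors="pt")
    hidden = llm(**inputs).last_hidden_state   # (1, seq_len, hidden_size)
    return proj(hidden.float())                # (1, seq_len, 2048)
```

During training, only the LoRA adapters and the projection would receive gradients, which matches the abstract's framing: the LLM's comprehension is transferred, not retrained from scratch.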
- Weijia Wu (47 papers)
- Zhuang Li (69 papers)
- Yefei He (19 papers)
- Mike Zheng Shou (165 papers)
- Chunhua Shen (404 papers)
- Lele Cheng (6 papers)
- Yan Li (505 papers)
- Tingting Gao (25 papers)
- Di Zhang (230 papers)
- Zhongyuan Wang (105 papers)