
RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model (2309.00810v1)

Published 2 Sep 2023 in cs.CV and cs.AI

Abstract: Text-to-image generation (TTI) refers to models that process text input and generate high-fidelity images from text descriptions. Neural text-to-image generation can be traced back to the emergence of the Generative Adversarial Network (GAN), followed by the autoregressive Transformer. Diffusion models are a prominent type of generative model that synthesize images by systematically adding noise over repeated steps and learning to reverse the process. Owing to their impressive results on image synthesis, diffusion models have been cemented as the dominant image decoder in text-to-image models and have brought text-to-image generation to the forefront of machine-learning (ML) research. In the era of large models, scaling up model size and integrating LLMs have further improved the performance of TTI models, yielding generations that are nearly indistinguishable from real-world images and revolutionizing the way we retrieve images. Our exploratory study leads us to believe that text-to-image models can be scaled further by combining innovative model architectures with prediction-enhancement techniques. We divide this survey into five main sections, in which we detail the frameworks of the major literature in order to examine the different types of text-to-image generation methods. We then provide a detailed comparison and critique of these methods and offer possible pathways for improvement in future work. Finally, we argue that TTI development could yield impressive productivity gains for content creation, particularly in the AIGC era, and could be extended to more complex tasks such as video generation and 3D generation.
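The noising process the abstract refers to can be made concrete with a short sketch of the standard DDPM forward step, which corrupts a clean sample toward Gaussian noise over many steps. The variable names, the linear beta schedule, and the toy input below are illustrative assumptions following the common DDPM formulation, not details taken from the survey itself:

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng=None):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).

    This closed form composes t small Gaussian noising steps, with
    alpha_bar_t = prod_{s<=t} (1 - beta_s); as t grows, alpha_bar_t -> 0
    and x_t approaches pure noise.
    """
    rng = rng or np.random.default_rng(0)
    alphas = 1.0 - betas                   # per-step fraction of signal retained
    alpha_bar = np.cumprod(alphas)[t]      # cumulative signal retained up to step t
    noise = rng.standard_normal(x0.shape)  # epsilon ~ N(0, I)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# Illustrative linear beta schedule over 1000 steps (a common DDPM choice).
betas = np.linspace(1e-4, 0.02, 1000)
x0 = np.ones((4, 4))                       # toy stand-in for an image
x_mid = forward_diffusion(x0, t=500, betas=betas)   # partially noised
x_end = forward_diffusion(x0, t=999, betas=betas)   # almost pure noise
```

A trained diffusion model learns the reverse of this process, predicting and removing the noise step by step; text conditioning steers that denoising toward images matching the prompt.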

Authors (13)
  1. Fengxiang Bie (1 paper)
  2. Yibo Yang (80 papers)
  3. Zhongzhu Zhou (7 papers)
  4. Adam Ghanem (2 papers)
  5. Minjia Zhang (54 papers)
  6. Zhewei Yao (64 papers)
  7. Xiaoxia Wu (30 papers)
  8. Connor Holmes (20 papers)
  9. Pareesa Golnari (1 paper)
  10. David A. Clifton (54 papers)
  11. Yuxiong He (59 papers)
  12. Dacheng Tao (829 papers)
  13. Shuaiwen Leon Song (35 papers)
Citations (9)
