Counting Guidance for High Fidelity Text-to-Image Synthesis (2306.17567v2)

Published 30 Jun 2023 in cs.CV

Abstract: Recently, there have been significant improvements in the quality and performance of text-to-image generation, largely due to the impressive results attained by diffusion models. However, text-to-image diffusion models sometimes struggle to create high-fidelity content for the given input prompt. One specific issue is their difficulty in generating the precise number of objects specified in the text prompt. For example, given the prompt "five apples and ten lemons on a table," images generated by diffusion models often contain an incorrect number of objects. In this paper, we present a method to improve diffusion models so that they accurately produce the correct object count for the input prompt. We adopt a counting network that performs reference-less, class-agnostic counting on any given image. We calculate the gradients of the counting network and use them to refine the predicted noise at each denoising step. To handle prompts containing multiple object types, we use novel attention-map guidance to obtain a high-quality mask for each object. Finally, we guide the denoising process with the gradients calculated per object. Through extensive experiments and evaluation, we demonstrate that the proposed method significantly improves the fidelity of diffusion models with respect to object count.
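The procedure the abstract outlines (estimate a clean image from the current noisy sample, evaluate a counting loss inside per-object attention masks, and correct the predicted noise with the loss gradient) follows the classifier-guidance pattern. Below is a minimal sketch of one such guided step. The interfaces `eps_model`, `counter`, `scheduler.alpha_bar`, and `scheduler.step` are illustrative assumptions, not the authors' code.

```python
import torch

def counting_guided_step(eps_model, counter, scheduler, x_t, t, cond,
                         masks, target_counts, guidance_scale=1.0):
    """One denoising step with counting guidance (a sketch, not the paper's
    exact implementation). Assumed interfaces:
      eps_model(x_t, t, cond) -> predicted noise tensor
      counter(img)            -> scalar predicted object count
      scheduler.alpha_bar(t)  -> cumulative alpha-bar at step t (tensor)
      scheduler.step(eps, t, x) -> next noisy sample
    masks: per-object masks derived from cross-attention maps.
    target_counts: desired object count for each mask.
    """
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t, cond)

    # One-step clean-image estimate: x0 = (x_t - sqrt(1 - a) * eps) / sqrt(a)
    a = scheduler.alpha_bar(t)
    x0_hat = (x_t - (1 - a).sqrt() * eps) / a.sqrt()

    # Squared error between predicted and target count inside each object mask.
    loss = sum((counter(x0_hat * m) - c) ** 2
               for m, c in zip(masks, target_counts))

    # Gradient of the counting loss w.r.t. the noisy sample, used to refine
    # the predicted noise in the spirit of classifier guidance.
    grad = torch.autograd.grad(loss, x_t)[0]
    eps_guided = eps + guidance_scale * (1 - a).sqrt() * grad

    return scheduler.step(eps_guided, t, x_t.detach())
```

Running the counting network on the one-step estimate `x0_hat` rather than on the noisy sample is the usual choice in such guidance schemes, since counting models are trained on clean images.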

Authors (4)
  1. Wonjun Kang
  2. Kevin Galim
  3. Hyung Il Koo
  4. Nam Ik Cho