BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing (2305.14720v2)

Published 24 May 2023 in cs.CV and cs.AI

Abstract: Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications. Code and models will be released at https://github.com/salesforce/LAVIS/tree/main/projects/blip-diffusion. Project page at https://dxli94.github.io/BLIP-Diffusion-website/.

References (39)
  1. Adding conditional control to text-to-image diffusion models. arXiv preprint arXiv:2302.05543, 2023.
  2. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626, 2022.
  3. GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. In International Conference on Machine Learning, pages 16784–16804. PMLR, 2022.
  4. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125, 2022.
  5. Zero-shot text-to-image generation. In International Conference on Machine Learning, pages 8821–8831. PMLR, 2021.
  6. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
  7. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022.
  8. An image is worth one word: Personalizing text-to-image generation using textual inversion. arXiv preprint arXiv:2208.01618, 2022.
  9. DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation. arXiv preprint arXiv:2208.12242, 2022.
  10. Multi-concept customization of text-to-image diffusion. arXiv preprint arXiv:2212.04488, 2022.
  11. Designing an encoder for fast personalization of text-to-image models. arXiv preprint arXiv:2302.12228, 2023.
  12. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.
  13. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021.
  14. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
  15. Diffusion models beat GANs on image synthesis. In Advances in Neural Information Processing Systems, volume 34, pages 8780–8794. Curran Associates, Inc., 2021.
  16. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
  17. Taming encoder for zero fine-tuning image customization with text-to-image diffusion models. arXiv preprint arXiv:2304.02642, 2023.
  18. InstantBooth: Personalized text-to-image generation without test-time finetuning. arXiv preprint arXiv:2304.03411, 2023.
  19. Subject-driven text-to-image generation via apprenticeship learning. arXiv preprint arXiv:2304.00186, 2023.
  20. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
  21. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016.
  22. The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale. International Journal of Computer Vision, 128(7):1956–1981, 2020.
  23. Image segmentation using text and image prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7086–7096, 2022.
  24. A closed-form solution to natural image matting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):228–242, 2007.
  25. PyMatting: A Python library for alpha matting. Journal of Open Source Software, 5(54):2481, 2020.
  26. Decoupled weight decay regularization. In International Conference on Learning Representations, 2017.
  27. EDICT: Exact diffusion inversion via coupled transformations. arXiv preprint arXiv:2211.12446, 2022.
  28. LAION-400M: Open dataset of CLIP-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021.
  29. BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888–12900. PMLR, 2022.
  30. Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, pages 740–755. Springer, 2014.
  31. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123:32–73, 2017.
  32. Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556–2565, 2018.
  33. Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558–3568, 2021.
  34. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186, 2019.
  35. InstructPix2Pix: Learning to follow image editing instructions. arXiv preprint arXiv:2211.09800, 2022.
  36. Re-Imagen: Retrieval-augmented text-to-image generator. arXiv preprint arXiv:2209.14491, 2022.
  37. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9650–9660, 2021.
  38. Im2Text: Describing images using 1 million captioned photographs. Advances in Neural Information Processing Systems, 24, 2011.
  39. Pseudo numerical methods for diffusion models on manifolds. In International Conference on Learning Representations, 2022.
Authors (3)
  1. Dongxu Li (40 papers)
  2. Junnan Li (56 papers)
  3. Steven C. H. Hoi (94 papers)
Citations (219)

Summary

BLIP-Diffusion: Enhancements in Subject-Driven Text-to-Image Generation

The paper "BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing" introduces an innovative approach in the domain of text-to-image generation. The researchers present the BLIP-Diffusion model with an emphasis on subject-driven image generation using a pre-trained multimodal encoder. This method addresses key limitations of existing models, such as lengthy fine-tuning durations and challenges in maintaining subject fidelity.

Methodology

BLIP-Diffusion uses a multimodal encoder, pre-trained following BLIP-2, to produce a subject representation aligned with text. Generation is handled by a latent diffusion model, specifically Stable Diffusion, with the visual subject representation infused into the text prompt embeddings to guide image synthesis.
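
To make this conditioning pathway concrete, the sketch below shows one way an encoder's subject output can be fused with prompt embeddings before the denoising UNet attends over them. It is an illustrative sketch only: `multimodal_encoder`, `text_encoder`, and `project` are placeholder callables, not the released BLIP-Diffusion API.

```python
# Illustrative sketch (not the released implementation): fuse a pre-trained
# subject representation with text prompt embeddings to condition a diffusion UNet.
import torch

def build_conditioning(subject_image, subject_text, prompt,
                       multimodal_encoder, text_encoder, project):
    # Placeholder: a BLIP-2-style encoder queried with the subject image and its
    # category word returns a small set of subject embeddings, e.g. shape (1, k, d).
    subject_tokens = multimodal_encoder(subject_image, subject_text)

    # Placeholder: the diffusion model's text encoder maps the prompt to
    # token embeddings of shape (1, T, d).
    prompt_tokens = text_encoder(prompt)

    # Project the subject tokens into the text embedding space and append them,
    # so the UNet's cross-attention sees both the prompt and the subject.
    return torch.cat([prompt_tokens, project(subject_tokens)], dim=1)
```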

The model's effectiveness rests on a two-stage pre-training strategy. First, multimodal representation learning aligns visual representations with text. Second, a subject representation learning stage teaches the diffusion model to use these representations to generate new renditions of the input subject.
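
The second stage can be pictured as a standard latent-diffusion denoising objective in which the conditioning includes the encoder's subject tokens. The step below is a minimal sketch under that assumption; `encoder`, `unet`, `vae`, `scheduler`, and `fuse` are hypothetical stand-ins, and the paper's prompted context generation (composing subjects onto random backgrounds) is omitted for brevity.

```python
# Minimal sketch of a subject representation learning step, assuming a standard
# epsilon-prediction loss; all objects below are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def subject_training_step(batch, encoder, unet, vae, scheduler, fuse):
    # Subject tokens from the multimodal encoder, given the subject image and text.
    subject_tokens = encoder(batch["subject_image"], batch["subject_text"])

    # Standard latent diffusion: noise the VAE latents of the target image.
    latents = vae.encode(batch["target_image"])
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.num_train_timesteps, (latents.shape[0],))
    noisy_latents = scheduler.add_noise(latents, noise, t)

    # Condition the UNet on the caption embeddings fused with the subject tokens,
    # and train it to predict the added noise.
    noise_pred = unet(noisy_latents, t, context=fuse(batch["caption"], subject_tokens))
    return F.mse_loss(noise_pred, noise)
```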

Numerical Results and Comparisons

Empirical evaluations show that BLIP-Diffusion supports zero-shot subject-driven generation and fine-tunes to a customized subject with up to a 20x speedup over methods such as DreamBooth. The authors report comprehensive comparisons on the DreamBooth dataset, where the model achieves strong subject fidelity and prompt relevance; DINO and CLIP scores quantify subject alignment and image-text consistency.
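
For context on the metrics named above: DINO measures cosine similarity between self-supervised ViT features of generated and reference subject images, while CLIP-I and CLIP-T measure image-image and image-text similarity in CLIP space. A sketch of these scores follows; the specific checkpoints (DINO ViT-S/16, CLIP ViT-B/32) are common choices assumed here, not necessarily the paper's exact configuration.

```python
# Sketch of subject- and text-alignment scores in the spirit of DINO / CLIP-I / CLIP-T.
# Checkpoints are assumed defaults; images are expected as PIL.Image objects.
import torch
import torch.nn.functional as F
from torchvision import transforms
from transformers import CLIPModel, CLIPProcessor

dino = torch.hub.load("facebookresearch/dino:main", "dino_vits16").eval()
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

dino_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

@torch.no_grad()
def dino_score(generated, reference):
    # Cosine similarity of self-supervised ViT features (subject alignment).
    a = dino(dino_tf(generated).unsqueeze(0))
    b = dino(dino_tf(reference).unsqueeze(0))
    return F.cosine_similarity(a, b).item()

@torch.no_grad()
def clip_scores(generated, reference, prompt):
    # CLIP-I: image-image similarity; CLIP-T: image-text similarity.
    inputs = clip_proc(text=[prompt], images=[generated, reference],
                       return_tensors="pt", padding=True)
    img = F.normalize(clip_model.get_image_features(pixel_values=inputs["pixel_values"]), dim=-1)
    txt = F.normalize(clip_model.get_text_features(input_ids=inputs["input_ids"],
                                                   attention_mask=inputs["attention_mask"]), dim=-1)
    clip_i = (img[0] @ img[1]).item()
    clip_t = (img[0] @ txt[0]).item()
    return clip_i, clip_t
```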

Practical and Theoretical Implications

Practically, BLIP-Diffusion expands the potential applications of text-to-image models by allowing flexible and high-fidelity generation. This flexibility is further enhanced by combining the model with existing techniques such as ControlNet and prompt-to-prompt editing. Theoretically, the approach shifts the paradigm towards using pre-trained representations for subject-driven tasks, reducing the dependency on extensive fine-tuning.
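
As a rough picture of the ControlNet combination: a ControlNet branch reads a structure hint (for example an edge or depth map) and contributes per-block residuals to the same UNet that already receives the fused text-and-subject conditioning, so structure and subject control act jointly at every denoising step. The sketch below uses placeholder interfaces rather than a specific library API.

```python
# Conceptual sketch of one denoising step combining subject conditioning with a
# ControlNet structure hint; `unet`, `controlnet`, and `scheduler` are placeholders.
def controlled_denoise_step(latents, t, conditioning, structure_hint,
                            unet, controlnet, scheduler):
    # The ControlNet branch (a trainable copy of the UNet encoder) turns the
    # structure hint into one residual feature map per UNet block.
    residuals = controlnet(latents, t, conditioning, structure_hint)

    # The frozen UNet receives the fused text+subject conditioning and adds the
    # ControlNet residuals to its skip connections before predicting the noise.
    noise_pred = unet(latents, t, conditioning, block_residuals=residuals)

    # Standard scheduler update to obtain the next (less noisy) latents.
    return scheduler.step(noise_pred, t, latents)
```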

Future Directions

Looking ahead, BLIP-Diffusion's framework paves the way for more generalized subject representations, potentially enabling broader applications across diverse subject categories. Integrating additional modalities or refining the multimodal pre-training process could further enhance both the robustness and the scope of text-to-image models.

In conclusion, BLIP-Diffusion represents a substantial methodological advancement in the field of subject-driven text-to-image generation. Its novel approach to leveraging pre-trained multimodal representations offers both theoretical insights and practical tools for advancing AI-driven creativity and productivity.