ObjectComposer: Consistent Generation of Multiple Objects Without Fine-tuning (2310.06968v1)
Abstract: Recent text-to-image generative models can generate high-fidelity images from text prompts. However, these models struggle to consistently render the same objects with the same appearance across different contexts. Consistent object generation is important for many downstream tasks, such as generating comic book illustrations with consistent characters and settings. Numerous approaches attempt to solve this problem by extending the vocabulary of diffusion models through fine-tuning. However, even lightweight fine-tuning approaches can be prohibitively expensive to run at scale and in real time. We introduce a method called ObjectComposer for generating compositions of multiple objects that resemble user-specified images. Our approach is training-free, leveraging the capabilities of pre-existing models. We build upon the recent BLIP-Diffusion model, which can generate images of single objects specified by reference images. ObjectComposer enables the consistent generation of compositions containing multiple specific objects simultaneously, all without modifying the weights of the underlying models.
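The abstract describes composing multiple reference-specified objects in one image without fine-tuning. One building block cited below, MultiDiffusion, fuses several diffusion paths by averaging their latent proposals under spatial masks. The sketch below is a minimal, hedged illustration of that mask-weighted fusion step on plain arrays; the function name `fuse_latents` and the single-channel latents are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fuse_latents(proposals, masks):
    """MultiDiffusion-style fusion (illustrative sketch, not the paper's code).

    proposals: list of (H, W) latent proposals, one per object/region.
    masks: list of (H, W) {0, 1} masks indicating where each proposal applies.
    Pixels covered by several masks are averaged; uncovered pixels stay 0.
    """
    num = np.zeros_like(proposals[0], dtype=float)  # mask-weighted sum
    den = np.zeros_like(proposals[0], dtype=float)  # total mask coverage
    for z, m in zip(proposals, masks):
        num += m * z
        den += m
    # Average where at least one mask covers the pixel; zero elsewhere.
    return np.where(den > 0, num / np.maximum(den, 1e-8), 0.0)
```

In a full pipeline, each proposal would come from one denoising step of a subject-conditioned model (e.g. BLIP-Diffusion for one reference object), and this fusion would be applied at every timestep.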
- MultiDiffusion: Fusing diffusion paths for controlled image generation, 2023.
- An image is worth one word: Personalizing text-to-image generation using textual inversion, 2022.
- Prompt-to-prompt image editing with cross attention control, 2022.
- Denoising diffusion probabilistic models, 2020.
- BLIP-Diffusion: Pre-trained subject representation for controllable text-to-image generation and editing, 2023.
- Null-text inversion for editing real images using guided diffusion models, 2022.
- Nobuyuki Otsu. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1):62–66, 1979.
- High-resolution image synthesis with latent diffusion models, 2022.
- DreamBooth: Fine-tuning text-to-image diffusion models for subject-driven generation, 2023.
- Photorealistic text-to-image diffusion models with deep language understanding, 2022.