
MC$^2$: Multi-concept Guidance for Customized Multi-concept Generation (2404.05268v3)

Published 8 Apr 2024 in cs.CV

Abstract: Customized text-to-image generation, which synthesizes images based on user-specified concepts, has made significant progress in handling individual concepts. However, when extended to multiple concepts, existing methods often struggle with properly integrating different models and avoiding the unintended blending of characteristics from distinct concepts. In this paper, we propose MC$^2$, a novel approach for multi-concept customization that enhances flexibility and fidelity through inference-time optimization. MC$^2$ enables the integration of multiple single-concept models with heterogeneous architectures. By adaptively refining attention weights between visual and textual tokens, our method ensures that image regions accurately correspond to their associated concepts while minimizing interference between concepts. Extensive experiments demonstrate that MC$^2$ outperforms training-based methods in terms of prompt-reference alignment. Furthermore, MC$^2$ can be seamlessly applied to text-to-image generation, providing robust compositional capabilities. To facilitate the evaluation of multi-concept customization, we also introduce a new benchmark, MC++. The code will be publicly available at https://github.com/JIANGJiaXiu/MC-2.
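The abstract's core mechanism, refining cross-attention so that each image region attends to its own concept's tokens while suppressing interference from other concepts, can be illustrated with a minimal numpy sketch. This is not the paper's actual algorithm (which operates inside a diffusion model at inference time); the function name, the slice-based concept grouping, and the `strength` parameter are all illustrative assumptions.

```python
import numpy as np

def refine_attention(attn, concept_slices, strength=0.5):
    """Toy sketch: damp cross-concept interference in a patch-by-token
    cross-attention map, then renormalize each patch's row to sum to 1.

    attn           : (num_patches, num_tokens) array of attention weights.
    concept_slices : list of slices, each selecting one concept's tokens.
    strength       : illustrative interference-suppression factor.
    """
    refined = attn.copy()
    for i, sl_i in enumerate(concept_slices):
        for j, sl_j in enumerate(concept_slices):
            if i == j:
                continue
            # How strongly each patch already attends to concept j's tokens.
            dominance_j = attn[:, sl_j].sum(axis=1, keepdims=True)
            # Patches dominated by concept j get their attention to concept
            # i's tokens damped, reducing blending of characteristics.
            refined[:, sl_i] -= strength * dominance_j * attn[:, sl_i]
    refined = np.clip(refined, 1e-8, None)
    return refined / refined.sum(axis=1, keepdims=True)
```

In the paper's setting the analogous refinement would be driven by an optimization objective at each denoising step rather than this fixed update rule; the sketch only conveys the shape of the idea.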

References (1)
Authors (8)
  1. Jiaxiu Jiang (1 paper)
  2. Yabo Zhang (13 papers)
  3. Kailai Feng (4 papers)
  4. Xiaohe Wu (23 papers)
  5. Wangmeng Zuo (279 papers)
  6. Wenbo Li (115 papers)
  7. Renjing Pei (26 papers)
  8. Fan Li (191 papers)
Citations (7)
