
PrimeComposer: Faster Progressively Combined Diffusion for Image Composition with Attention Steering (2403.05053v3)

Published 8 Mar 2024 in cs.CV and cs.AI

Abstract: Image composition involves seamlessly integrating given objects into a specific visual context. Current training-free methods rely on composing attention weights from several samplers to guide the generator. However, since these weights are derived from disparate contexts, their combination leads to coherence confusion and loss of appearance information. These issues worsen with their excessive focus on background generation, even when unnecessary in this task. This not only impedes their swift implementation but also compromises foreground generation quality. Moreover, these methods introduce unwanted artifacts in the transition area. In this paper, we formulate image composition as a subject-based local editing task, solely focusing on foreground generation. At each step, the edited foreground is combined with the noisy background to maintain scene consistency. To address the remaining issues, we propose PrimeComposer, a faster training-free diffuser that composites the images by well-designed attention steering across different noise levels. This steering is predominantly achieved by our Correlation Diffuser, utilizing its self-attention layers at each step. Within these layers, the synthesized subject interacts with both the referenced object and background, capturing intricate details and coherent relationships. This prior information is encoded into the attention weights, which are then integrated into the self-attention layers of the generator to guide the synthesis process. Besides, we introduce a Region-constrained Cross-Attention to confine the impact of specific subject-related tokens to desired regions, addressing the unwanted artifacts shown in the prior method thereby further improving the coherence in the transition area. Our method exhibits the fastest inference efficiency and extensive experiments demonstrate our superiority both qualitatively and quantitatively.

Citations (6)

Summary

  • The paper presents a novel image composition framework that models the task as subject-based local editing to better preserve object appearance.
  • It employs a Correlation Diffuser and Region-Constrained Cross-Attention to steer self-attention and mitigate synthesis artifacts.
  • Experiments show faster inference and superior LPIPS/CLIP scores compared to existing state-of-the-art methods.

PrimeComposer: Faster Progressively Combined Diffusion for Image Composition with Attention Steering

Introduction

Image composition requires integrating user-specified objects into specific visual contexts, preserving their appearance while ensuring coherent transitions. Existing methods often rely on attention weights from multiple samplers to guide synthesis, which leads to coherence confusion and appearance loss because the weights are derived from disparate contexts. PrimeComposer addresses these issues by treating composition as a subject-based local editing task, focusing solely on foreground generation and avoiding unnecessary background synthesis. The method applies well-designed attention steering across noise levels through a Correlation Diffuser, with Region-constrained Cross-Attention (RCA) for artifact mitigation (Figure 1).

Figure 1: Current methods encounter significant challenges in preserving the objects' appearance (left) and synthesizing natural coherence (right). The problematic areas of coherence are indicated by red dotted lines.

Methodology

PrimeComposer formulates image composition as a local editing task, emphasizing foreground generation. At each denoising step, the edited foreground latent is combined with the noised background latent to maintain scene consistency. The Correlation Diffuser leverages self-attention layers to capture intricate interactions between the synthesized subject, the referenced object, and the background, while RCA confines the influence of subject-related text tokens to specific regions (Figure 2).

Figure 2: The overview of our PrimeComposer.
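The per-step foreground/background blending described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the tensor shapes, the `alpha_bar_t` schedule value, and the function name are assumptions:

```python
import torch

def combine_latents(edited_fg, background, mask, noise, alpha_bar_t):
    """Keep the edited foreground inside the mask; outside it, substitute
    the background noised to the current diffusion level (standard DDPM
    forward process: sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps)."""
    noisy_bg = alpha_bar_t.sqrt() * background + (1.0 - alpha_bar_t).sqrt() * noise
    return mask * edited_fg + (1.0 - mask) * noisy_bg
```

Blending in latent space at every step keeps the background anchored to the source image, so only the masked foreground region is actually synthesized.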

Self-Attention Steering

The self-attention steering mechanism injects prior knowledge encoded in the Correlation Diffuser's attention weights, preserving object appearance without compromising synthesis quality. Only appearance-related weights relevant to the U-Net decoder are infused, countering potential style inconsistency and subject overfitting (Figure 3).

Figure 3: Qualitative results regarding the unexpected coherence problem, i.e., style inconsistency.
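A rough sketch of the steering idea follows. The actual method injects weights inside selected self-attention layers of the U-Net; the interpolation form and the `strength` parameter here are illustrative assumptions:

```python
import torch

def steer_self_attention(gen_attn, prior_attn, strength=1.0):
    """Blend the generator's self-attention maps toward prior maps captured
    from the Correlation Diffuser pass. strength=1.0 fully substitutes the
    prior weights; intermediate values interpolate between the two."""
    return (1.0 - strength) * gen_attn + strength * prior_attn
```

In practice such a function would typically be installed as a forward hook on chosen decoder attention layers and applied only at the noise levels where appearance transfer is beneficial.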

Region-Constrained Cross-Attention

RCA mitigates artifacts by restricting the influence of specific subject-related text tokens to mask-defined regions, ensuring objects appear only within their intended areas (Figure 4).

Figure 4: The effectiveness of our Region-constrained Cross-Attention.
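A minimal single-head sketch of the masking idea. The shapes, the `subject_token_ids` argument, and the pre-softmax masking are assumptions about how such a constraint can be realized:

```python
import torch

def region_constrained_cross_attention(q, k, v, subject_token_ids, region_mask):
    """q: (num_pixels, d) image queries; k, v: (num_tokens, d) text keys/values;
    region_mask: (num_pixels,) bool, True where the subject may appear.
    Pixels outside the region get -inf scores for subject tokens before the
    softmax, so those tokens cannot influence, and thus cannot leak artifacts
    into, the rest of the image."""
    scores = q @ k.t() / (q.shape[-1] ** 0.5)
    neg_inf = torch.finfo(scores.dtype).min
    outside = (~region_mask).nonzero(as_tuple=True)[0]
    for tid in subject_token_ids:
        scores[outside, tid] = neg_inf
    return scores.softmax(dim=-1) @ v
```

After the softmax, the attention mass a pixel outside the region assigns to subject tokens is effectively zero, redistributing it to the remaining tokens.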

Experimental Evaluation

Experiments on a cross-domain composition benchmark demonstrate PrimeComposer's proficiency across visual domains, yielding the fastest inference times and the best LPIPS and CLIP scores for image quality and semantic accuracy (Figure 5).

Figure 5: Qualitative comparison with SOTA baselines in cross-domain image composition. All the results of TF-ICON come from its original paper.

Quantitative Analysis

PrimeComposer outperforms competitors across multiple metrics, with substantial improvements in object appearance preservation and coherent synthesis. Inference is significantly faster than prior solutions such as TF-ICON, since fewer samplers are integrated (Figure 6).

Figure 6: Ablation study of different variants of our framework. RCA: Region-constrained Cross-Attention. CFG: Classifier-free Guidance.

User Study and Societal Impacts

In user studies, participants favor PrimeComposer for seamless compositions across domains. Despite the innovation, careful attention is warranted regarding cultural representation and the potential misuse in creating misleading images. Future work might explore real-time applications, viewpoint control, and multi-object integration.

Conclusion

PrimeComposer represents an efficient strategy for image composition, leveraging attention steering for faster synthesis while maintaining high quality across diverse domains. The integration of Correlation Diffuser and RCA effectively addresses prior limitations, making it a superior choice for training-free composition (Figure 7).

Figure 7: Additional cases of challenges in preserving the objects' appearance (left) and synthesizing natural coherence (right). The problematic areas of coherence are indicated by red dotted lines.
