This paper introduces LocInv (Localization-aware Inversion), a method designed to improve text-guided image editing using diffusion models like Stable Diffusion. The core problem addressed is cross-attention leakage, where existing editing techniques inadvertently modify regions outside the intended target area because the model's attention mechanism doesn't perfectly align object concepts in the text prompt with the correct spatial regions in the image. This is particularly challenging in images with multiple objects.
To combat this, LocInv incorporates localization priors – specifically, segmentation masks or bounding boxes – during the DDIM inversion process. These priors, which can be obtained from foundation models like SAM or Grounding DINO, guide the refinement of cross-attention maps.
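For concreteness, the prior only needs to exist at the (much coarser) resolution of the cross-attention maps. Below is a minimal sketch of that preprocessing step, assuming the prior arrives as a binary per-pixel mask (a segmentation mask, or a detection box rasterized into one) and that the relevant cross-attention maps are 16x16; the function name and resolution are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F

def prior_to_attention_res(mask: torch.Tensor, res: int = 16) -> torch.Tensor:
    """Downsample a binary localization prior of shape (H, W) to the cross-attention
    resolution (res, res) and rescale it to [0, 1]. The 16x16 default and the
    function name are illustrative assumptions, not part of the paper."""
    m = mask.float()[None, None]                                     # (1, 1, H, W)
    m = F.interpolate(m, size=(res, res), mode="bilinear", align_corners=False)
    return (m / (m.max() + 1e-8))[0, 0]                              # (res, res) in [0, 1]
```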
The key mechanism is dynamic prompt learning: the token embeddings corresponding to noun words in the text prompt are updated at each timestep of the denoising process. The update is driven by optimization losses that align the cross-attention maps (A_t) with the provided localization priors (M):
- Similarity Loss (L_sim): Encourages high cosine similarity between the attention map of a noun token and its corresponding localization prior.
- Overlapping Loss (L_ovl): Maximizes the fraction of the attention map's mass that falls within the localization prior region.
These losses are combined into a weighted total loss and minimized iteratively over the noun token embeddings at each timestep t. To prevent overfitting and to manage the gradual accumulation of errors, the optimization uses a gradual threshold mechanism: at each timestep the inner updates stop once the losses reach predefined thresholds that decrease over time (see the sketch below).
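A minimal sketch of how the two losses and the thresholded per-timestep token update could look is given below; the loss weights, optimizer, learning rate, iteration cap, and the `attn_maps` callable are all illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def similarity_loss(attn: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
    # 1 - cosine similarity between the flattened attention map and the prior.
    return 1.0 - F.cosine_similarity(attn.flatten()[None], prior.flatten()[None]).squeeze()

def overlapping_loss(attn: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
    # Fraction of attention mass falling outside the prior region (to be minimized).
    inside = (attn * prior).sum()
    return 1.0 - inside / (attn.sum() + 1e-8)

def update_noun_tokens(tokens, attn_maps, priors, threshold,
                       lam_sim=1.0, lam_ovl=1.0, lr=1e-2, max_iters=15):
    """Refine the noun token embeddings at one timestep until the combined loss
    drops below that timestep's threshold. `attn_maps(tokens)` is a hypothetical
    callable returning one cross-attention map per noun token for the current latent;
    the weights, learning rate, and iteration cap are illustrative defaults."""
    tokens = tokens.detach().clone().requires_grad_(True)
    opt = torch.optim.AdamW([tokens], lr=lr)
    for _ in range(max_iters):
        maps = attn_maps(tokens)  # list of (res, res) attention maps, one per noun
        loss = sum(lam_sim * similarity_loss(a, p) + lam_ovl * overlapping_loss(a, p)
                   for a, p in zip(maps, priors))
        if loss.item() < threshold:  # per-timestep threshold, decreasing over the trajectory
            break
        opt.zero_grad()
        loss.backward()
        opt.step()
    return tokens.detach()
```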
Furthermore, LocInv addresses a common limitation in attribute editing (e.g., changing an object's color or material) by introducing an Adjective Binding Loss (L_adj). Using a parser such as spaCy to identify adjective-noun pairs in the prompt, this loss encourages the attention map of each adjective to align with the attention map of its corresponding noun.
This loss is added to the total loss when attribute editing is required.
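A sketch of what such a binding term could look like follows; using cosine similarity as the alignment measure (mirroring the similarity loss above) and detaching the noun's map are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def adjective_binding_loss(attn_adj: torch.Tensor, attn_noun: torch.Tensor) -> torch.Tensor:
    """Pull the adjective's cross-attention map toward its paired noun's map.
    Cosine similarity as the alignment measure, and detaching the noun map so that
    only the adjective's embedding is updated, are illustrative choices."""
    return 1.0 - F.cosine_similarity(attn_adj.flatten()[None],
                                     attn_noun.detach().flatten()[None]).squeeze()
```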
To ensure the original image can still be reconstructed accurately after inversion, LocInv integrates Null-Text Inversion (NTI), optimizing the null-text embedding at each timestep alongside the dynamic noun/adjective tokens. The final output of the LocInv process is the inverted noise latent z_T, the set of per-timestep dynamic tokens, and the per-timestep null-text embeddings. These are then used with an editing method such as Prompt-to-Prompt (P2P) for the actual image manipulation.
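Putting the pieces together, the outer loop could be organized as sketched below; `update_noun_tokens`, `optimize_null_embedding` (the NTI step), and `ddim_step` are hypothetical placeholders for the components described above, not the paper's actual interfaces.

```python
def locinv_inversion_loop(z_T, timesteps, tokens, priors, thresholds,
                          update_noun_tokens, optimize_null_embedding, ddim_step):
    """Illustrative outer loop only: walk the diffusion trajectory from the inverted
    noise z_T, refining the dynamic tokens and null-text embedding at every timestep."""
    dynamic_tokens, null_embeddings = [], []
    z = z_T
    for i, t in enumerate(timesteps):                   # t runs from T down to 1
        tokens = update_noun_tokens(tokens, z, t, priors, thresholds[i])
        null_t = optimize_null_embedding(z, t, tokens)  # NTI step for faithful reconstruction
        dynamic_tokens.append(tokens)
        null_embeddings.append(null_t)
        z = ddim_step(z, t, tokens, null_t)             # one denoising step
    # z_T plus the per-step tokens and null embeddings are handed to a P2P-style editor.
    return z_T, dynamic_tokens, null_embeddings
```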
Experiments were conducted on a COCO-edit subset derived from MS-COCO, comparing LocInv (using both segmentation and detection priors) against methods such as NTI, DPL, PnP, DiffEdit, MasaCtrl, pix2pix-zero, and fine-tuning/inpainting approaches. LocInv showed superior performance on quantitative metrics (LPIPS, SSIM, PSNR, DINO-Sim, background preservation) and in qualitative comparisons, especially for multi-object scenes, across both Word-Swap and Attribute-Edit tasks. Ablation studies confirmed the effectiveness of the proposed losses and hyperparameter choices. User studies also indicated a preference for LocInv's editing quality and background preservation over other non-finetuning methods.
The main contribution is a method that significantly reduces cross-attention leakage by leveraging readily available localization priors, leading to more precise text-guided image editing without needing model fine-tuning, and enabling effective attribute modification.