
Diffusion-based Image Translation using Disentangled Style and Content Representation (2209.15264v2)

Published 30 Sep 2022 in cs.CV, cs.AI, cs.LG, and stat.ML

Abstract: Diffusion-based image translation guided by semantic texts or a single target image has enabled flexible style transfer that is not limited to specific domains. Unfortunately, due to the stochastic nature of diffusion models, it is often difficult to maintain the original content of the image during the reverse diffusion. To address this, we present a novel diffusion-based unsupervised image translation method using disentangled style and content representation. Specifically, inspired by the splicing Vision Transformer, we extract intermediate keys of the multi-head self-attention layers from a ViT model and use them as a content preservation loss. Image-guided style transfer is then performed by matching the [CLS] classification token between the denoised samples and the target image, whereas an additional CLIP loss is used for text-driven style transfer. To further accelerate the semantic change during the reverse diffusion, we also propose a novel semantic divergence loss and resampling strategy. Our experimental results show that the proposed method outperforms state-of-the-art baseline models in both text-guided and image-guided translation tasks.

Citations (126)

Summary

  • The paper introduces a novel approach using disentangled style and content representations, enabling flexible and effective image translation.
  • It employs a Vision Transformer-based content preservation technique via intermediate multihead self-attention keys to maintain original image details.
  • The method integrates a semantic divergence loss and CLIP loss to accelerate translation and align style with both image and textual guidance.

The paper "Diffusion-based Image Translation using Disentangled Style and Content Representation" presents a novel approach to image translation using diffusion models. The focus is on overcoming the challenge of maintaining the original content of an image during reverse diffusion, a common issue given the stochastic nature of these models.

Key Contributions:

  1. Disentangled Representation:
    • The authors propose using disentangled style and content representation to enhance image translation. This involves separating the style and content features of an image to allow for more flexible style transfer.
  2. Content Preservation:
    • Utilizing a Vision Transformer (ViT), the paper introduces a content preservation technique that extracts intermediate keys from the multi-head self-attention layers. These keys serve as a content preservation loss, ensuring that the essential content of the original image is maintained during translation (see the first sketch after this list).
  3. Style Transfer Mechanism:
    • For image-guided style transfer, the method matches the [CLS] token of the denoised samples with that of the target image. For text-driven style transfer, an additional CLIP loss guides the translation toward the semantics of the target text (see the sketches after this list).
  4. Semantic Divergence Loss:
    • To enhance semantic changes during reverse diffusion, a novel semantic divergence loss and a resampling strategy are proposed. These innovations aim to accelerate and refine the translation process by promoting larger semantic changes (the second sketch after this list illustrates the loss terms).
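
The two image-side losses in items 2 and 3 can be sketched as follows. This is a minimal illustration, not the authors' released implementation: it assumes a DINO ViT-S/16 backbone from torch.hub (whose internal `blocks[-1].attn.qkv` layout matches the public DINO repository), takes keys from only the last self-attention block, and uses plain MSE objectives; the paper's layer choices, distance functions, and loss weights may differ.

```python
# Minimal sketch of the ViT-based content loss (intermediate self-attention keys)
# and the image-guided style loss ([CLS]-token matching). Assumptions: DINO ViT-S/16
# from torch.hub, last attention block only, MSE objectives, 224x224 inputs.
import torch
import torch.nn.functional as F

vit = torch.hub.load("facebookresearch/dino:main", "dino_vits16").eval()
for p in vit.parameters():
    p.requires_grad_(False)

_cache = {}

def _grab_qkv(_module, _inputs, output):
    # DINO's Attention.qkv is a single Linear producing concatenated [q | k | v].
    _cache["qkv"] = output

attn = vit.blocks[-1].attn                 # last self-attention block (illustrative choice)
attn.qkv.register_forward_hook(_grab_qkv)

def vit_keys_and_cls(img):
    """Return the multi-head self-attention keys of the hooked block and the
    final [CLS] token for a batch of 224x224 images."""
    cls_tok = vit(img)                     # DINO's forward returns the [CLS] embedding
    qkv = _cache["qkv"]                    # (B, N, 3*D), N = 1 + num_patches
    B, N, three_d = qkv.shape
    d, h = three_d // 3, attn.num_heads
    keys = qkv.reshape(B, N, 3, h, d // h)[:, :, 1]   # select k of (q, k, v)
    return keys.reshape(B, N, d), cls_tok

def content_loss(x_denoised, x_source):
    """Preserve structure by matching intermediate self-attention keys."""
    k_x, _ = vit_keys_and_cls(x_denoised)
    k_s, _ = vit_keys_and_cls(x_source)
    return F.mse_loss(k_x, k_s)

def image_style_loss(x_denoised, x_target):
    """Transfer style by matching the [CLS] token of the style target."""
    _, c_x = vit_keys_and_cls(x_denoised)
    _, c_t = vit_keys_and_cls(x_target)
    return F.mse_loss(c_x, c_t)

# Toy usage: random tensors stand in for the decoded diffusion estimate and images.
x0_hat = torch.rand(1, 3, 224, 224, requires_grad=True)
src, tgt = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
loss = content_loss(x0_hat, src) + image_style_loss(x0_hat, tgt)
loss.backward()    # the gradient w.r.t. x0_hat can then steer the reverse-diffusion step
```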

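Continuing the sketch, the text-driven style term and the semantic divergence term can be approximated as below. The exact forms are assumptions rather than the paper's definitions: a plain (non-directional) CLIP cosine loss stands in for the text guidance, the divergence term is written as a negated [CLS]-token distance to the source, and the resampling schedule is omitted. `vit_keys_and_cls` is the helper from the previous sketch, and OpenAI's `clip` package is assumed to be installed.

```python
# Sketch of the text-guided CLIP loss and the semantic divergence loss.
# Assumptions: OpenAI's `clip` package, a plain (non-directional) cosine objective,
# and a divergence term defined on the [CLS] token; the paper's exact forms may differ.
import clip
import torch
import torch.nn.functional as F

clip_model, _ = clip.load("ViT-B/32", device="cpu")
clip_model.eval()
for p in clip_model.parameters():
    p.requires_grad_(False)

def text_style_loss(x_denoised, prompt):
    """Pull the CLIP image embedding of the denoised estimate toward the text prompt."""
    img = F.interpolate(x_denoised, size=(224, 224), mode="bilinear", align_corners=False)
    img_emb = clip_model.encode_image(img)
    txt_emb = clip_model.encode_text(clip.tokenize([prompt]))
    return 1.0 - F.cosine_similarity(img_emb, txt_emb).mean()

def semantic_divergence_loss(x_denoised, x_source):
    """Encourage the global semantics of the denoised estimate to move away from
    the source image (negated distance, so minimizing it increases divergence)."""
    _, c_x = vit_keys_and_cls(x_denoised)
    _, c_s = vit_keys_and_cls(x_source)
    return -F.mse_loss(c_x, c_s)

# Illustrative combination used as guidance at each reverse-diffusion step
# (the lambda_* weights are placeholders, not values from the paper):
# total = content_loss(x0_hat, src) \
#         + lambda_txt * text_style_loss(x0_hat, "a watercolor painting") \
#         + lambda_div * semantic_divergence_loss(x0_hat, src)
```
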
Experimental Validation:

The proposed method is shown to outperform state-of-the-art models in both text-guided and image-guided image translation tasks. The results demonstrate the efficacy of the approach in maintaining content fidelity while achieving desired style transformations.

In summary, the paper advances the field of image translation by addressing a critical limitation of diffusion models through sophisticated techniques grounded in transformer architectures and semantic guidance. The integration of disentangled representations and novel loss mechanisms positions this work as a significant contribution to unsupervised image translation.