
Unsupervised Image-to-Image Translation with Self-Attention Networks (1901.08242v4)

Published 24 Jan 2019 in cs.CV

Abstract: Unsupervised image translation aims to learn the transformation from a source domain to another target domain given unpaired training data. Several state-of-the-art works have yielded impressive results in GAN-based unsupervised image-to-image translation. However, these methods fail to capture strong geometric or structural changes between domains, or produce unsatisfactory results for complex scenes, compared to local texture-mapping tasks such as style transfer. Recently, SAGAN (Zhang et al., 2018) showed that self-attention networks produce better results than convolution-based GANs. However, the effectiveness of self-attention networks in unsupervised image-to-image translation tasks has not been verified. In this paper, we propose unsupervised image-to-image translation with self-attention networks, in which long-range dependency helps not only to capture strong geometric changes but also to generate details using cues from all feature locations. In experiments, we qualitatively and quantitatively show the superiority of the proposed method compared to existing state-of-the-art unsupervised image-to-image translation methods. The source code and our results are online: https://github.com/itsss/img2img_sa and http://itsc.kr/2019/01/24/2019_img2img_sa

Citations (17)

Summary

  • The paper introduces a novel framework that integrates self-attention within GAN architectures to better manage strong geometric transformations.
  • It leverages bidirectional reconstruction and adversarial losses to maintain semantic consistency and high visual fidelity in unpaired image translation.
  • Quantitative FID scores and qualitative user studies confirm its superior performance over methods like CycleGAN and MUNIT.

Unsupervised Image-to-Image Translation with Self-Attention Networks: A Detailed Overview

The paper "Unsupervised Image-to-Image Translation with Self-Attention Networks" by Taewon Kang and Kwang Hee Lee introduces a novel approach to address the challenges in unsupervised image-to-image translation using self-attention mechanisms. The primary objective of this work is to improve the quality of image translation tasks where strong geometric transformations are required between the source and target domains, utilizing unpaired data. The research exploits the advantages of self-attention networks to capture long-range dependencies, which allows for better handling of details and geometric changes compared to traditional GAN-based methods.

Background and Motivation

Image-to-image translation comprises tasks such as inpainting, super-resolution, and style transfer. Typically, these tasks are solved using paired training data when available. However, acquiring such paired datasets can be both difficult and costly, motivating the need for unsupervised methods. Previous approaches using GANs have shown promise in unsupervised settings, but these solutions often struggle to maintain semantic consistency across domains, particularly with geometric transformations beyond texture-mapping tasks.

The paper leverages the success of self-attention generative adversarial networks (SAGAN) to enhance the image-to-image translation framework. SAGAN demonstrated improved results over traditional convolution-based GANs for conditional tasks, inspiring this application to the challenges present in cross-domain translation.

Proposed Methodology

The authors propose the integration of self-attention mechanisms within the generator and discriminator architectures of their model. This approach enables the model to capture long-range dependencies across the image, which is critical for successful translation tasks involving significant geometric and contextual transformations.
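To make the mechanism concrete, below is a minimal PyTorch sketch of a SAGAN-style self-attention layer of the kind the method employs. The class name `SelfAttention`, the 1×1-convolution projections, and the channel-reduction factor of 8 follow the SAGAN design but are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over a spatial feature map (a sketch)."""

    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces;
        # the 8x channel reduction follows SAGAN and keeps the n x n
        # attention matrix affordable.
        self.query = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learnable gate initialized to zero, so training starts from the
        # purely convolutional behaviour and gradually mixes in attention.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)  # (b, n, c//8)
        k = self.key(x).view(b, -1, n)                     # (b, c//8, n)
        attn = F.softmax(torch.bmm(q, k), dim=-1)          # (b, n, n)
        v = self.value(x).view(b, c, n)                    # (b, c, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        # Residual connection: every output location can draw on cues
        # from all feature locations, not just a local neighbourhood.
        return self.gamma * out + x
```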

Their model integrates self-attention layers at specific stages within the Multimodal Unsupervised Image-to-Image Translation (MUNIT) framework. This integration allows the generator to synthesize coherent images with consistent attention to minute details across different regions, thus improving the overall visual fidelity. For the discriminator, self-attention helps enforce more accurate geometric constraints on the generated images.
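The exact insertion points are an architectural choice. The following hypothetical sketch (reusing `SelfAttention` from above, with layer widths and counts chosen purely for illustration, not taken from the paper) shows how attention layers could be interleaved with the upsampling stages of a MUNIT-style decoder:

```python
import torch.nn as nn

# Hypothetical placement sketch, not the paper's reported architecture:
# attention sits at intermediate resolutions, where the n x n attention
# map is still affordable but long-range structure is already visible.
def build_decoder() -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        SelfAttention(256),                      # long-range cues at 1/4 scale
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(256, 128, kernel_size=5, padding=2), nn.ReLU(inplace=True),
        SelfAttention(128),                      # refined cues at 1/2 scale
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(128, 64, kernel_size=5, padding=2), nn.ReLU(inplace=True),
        nn.Conv2d(64, 3, kernel_size=7, padding=3), nn.Tanh(),
    )
```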

The model’s objective function includes both a bidirectional reconstruction loss, which accounts for image and latent code reconstruction, and an adversarial loss for ensuring consistency in distribution between translated and real images.
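Concretely, following the MUNIT formulation that the method builds on (with content encoders $E^c$, style encoders $E^s$, generators $G$, and discriminators $D$ for the two domains; the weights $\lambda_x, \lambda_c, \lambda_s$ are hyperparameters), the objective can be written as:

```latex
% Image reconstruction within a domain:
\mathcal{L}_{\text{recon}}^{x_1} =
  \mathbb{E}_{x_1}\!\left[\lVert G_1(E_1^c(x_1), E_1^s(x_1)) - x_1 \rVert_1\right]
% Latent (content and style) reconstruction after cross-domain translation:
\mathcal{L}_{\text{recon}}^{c_1} =
  \mathbb{E}_{c_1, s_2}\!\left[\lVert E_2^c(G_2(c_1, s_2)) - c_1 \rVert_1\right],
\qquad
\mathcal{L}_{\text{recon}}^{s_2} =
  \mathbb{E}_{c_1, s_2}\!\left[\lVert E_2^s(G_2(c_1, s_2)) - s_2 \rVert_1\right]
% Adversarial loss matching translated images to the target distribution:
\mathcal{L}_{\text{GAN}}^{x_2} =
  \mathbb{E}_{c_1, s_2}\!\left[\log\!\left(1 - D_2(G_2(c_1, s_2))\right)\right]
  + \mathbb{E}_{x_2}\!\left[\log D_2(x_2)\right]
% Full objective (the symmetric domain-2-to-1 terms are defined analogously):
\min_{E_1, E_2, G_1, G_2}\ \max_{D_1, D_2}\
  \mathcal{L}_{\text{GAN}}^{x_1} + \mathcal{L}_{\text{GAN}}^{x_2}
  + \lambda_x\!\left(\mathcal{L}_{\text{recon}}^{x_1} + \mathcal{L}_{\text{recon}}^{x_2}\right)
  + \lambda_c\!\left(\mathcal{L}_{\text{recon}}^{c_1} + \mathcal{L}_{\text{recon}}^{c_2}\right)
  + \lambda_s\!\left(\mathcal{L}_{\text{recon}}^{s_1} + \mathcal{L}_{\text{recon}}^{s_2}\right)
```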

Experimental Evaluation

To demonstrate the effectiveness of their approach, the authors conducted experiments on several datasets, including Cat2Dog, CelebA (human faces), and Edges2Shoes. Results across various translation tasks showed that the proposed method outperforms existing techniques such as CycleGAN, DRIT, UNIT, and MUNIT, especially when dealing with strong geometric transformations.

Quantitative assessments using the Fréchet Inception Distance (FID) confirmed that images generated by the proposed self-attention model matched the statistics of real images more closely than those produced by competing methods. Qualitative evaluations, including a user study, further demonstrated a preference for images generated by the proposed model over the alternatives.
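For reference, FID measures the distance between Gaussians fitted to Inception-v3 activations of real and generated images, where lower is better:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
  + \operatorname{Tr}\!\left(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\right)
```

Here $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the feature means and covariances of the real and generated image sets, respectively.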

Conclusion and Implications

This paper represents a significant advancement in the field of unsupervised image-to-image translation by successfully integrating self-attention mechanisms within GAN architectures. The results indicate a clear improvement in handling images with complex geometric transformations. The contributions of this research have implications for practical applications that require robust image modification without paired training data.

Future research can explore extending self-attention mechanisms to address additional challenges in image-to-image translation, including higher resolution tasks and real-time processing needs. The proposed framework lays the groundwork for further development in leveraging attention-based mechanisms in generative models for unsupervised learning.
