
Vector Quantized Image-to-Image Translation (2207.13286v1)

Published 27 Jul 2022 in cs.CV

Abstract: Current image-to-image translation methods formulate the task with conditional generation models, and thus learn only recolorization or regional changes because they are constrained by the rich structural information provided by the conditional contexts. In this work, we propose introducing the vector quantization technique into the image-to-image translation framework. The vector quantized content representation facilitates not only the translation, but also the unconditional distribution shared among different domains. Meanwhile, along with the disentangled style representation, the proposed method further enables image extension with flexibility in both intra- and inter-domain settings. Qualitative and quantitative experiments demonstrate that our framework achieves comparable performance to state-of-the-art image-to-image translation and image extension methods. Compared to methods for individual tasks, the proposed method, as a unified framework, unleashes applications combining image-to-image translation, unconditional generation, and image extension altogether. For example, it provides style variability for image generation and extension, and equips image-to-image translation with further extension capabilities.
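The core idea in the abstract is to replace the continuous content code with a vector quantized one. The sketch below shows what such a quantizer typically looks like (a VQ-VAE-style nearest-codebook lookup with a straight-through estimator); the class name, codebook size, feature dimension, and commitment weight are illustrative assumptions, not the paper's actual module.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Minimal VQ layer: snaps each continuous feature vector to its
    nearest codebook entry. Illustrative sketch only."""

    def __init__(self, num_codes=512, code_dim=256, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment-loss weight (assumed hyperparameter)

    def forward(self, z):  # z: (B, C, H, W) continuous content features
        B, C, H, W = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, C)            # (B*H*W, C)
        # squared Euclidean distance from each feature vector to every code
        d = (flat.pow(2).sum(1, keepdim=True)
             - 2 * flat @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        idx = d.argmin(dim=1)                                   # nearest code index
        z_q = self.codebook(idx).view(B, H, W, C).permute(0, 3, 1, 2)
        # codebook loss + commitment loss (standard VQ-VAE objective)
        loss = ((z_q - z.detach()).pow(2).mean()
                + self.beta * (z_q.detach() - z).pow(2).mean())
        z_q = z + (z_q - z).detach()                            # straight-through gradient
        return z_q, idx.view(B, H, W), loss
```

The discrete indices returned here are what make the content representation amenable to unconditional (autoregressive) modeling shared across domains, while a separate style code would be combined with the quantized content at decoding time.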

Authors (5)
  1. Yu-Jie Chen (13 papers)
  2. Shin-I Cheng (3 papers)
  3. Wei-Chen Chiu (54 papers)
  4. Hung-Yu Tseng (31 papers)
  5. Hsin-Ying Lee (60 papers)
Citations (14)
