Implicit Pairs for Boosting Unpaired Image-to-Image Translation (1904.06913v4)

Published 15 Apr 2019 in cs.CV and cs.LG

Abstract: In image-to-image translation, the goal is to learn a mapping from one image domain to another. In supervised approaches, the mapping is learned from paired samples. However, collecting large sets of image pairs is often either prohibitively expensive or infeasible. As a result, in recent years more attention has been given to techniques that learn the mapping from unpaired sets. In our work, we show that injecting implicit pairs into unpaired sets strengthens the mapping between the two domains, improves the compatibility of their distributions, and boosts the performance of unsupervised techniques by over 14% across several measurements. The effectiveness of implicit pairs is further demonstrated with pseudo-pairs, i.e., paired samples that only approximate a real pair. We demonstrate the effect of these approximated implicit samples on image-to-image translation problems where pseudo-pairs can be synthesized in one direction but not in the other. We further show that pseudo-pairs are significantly more effective as implicit pairs in an unpaired setting than when used explicitly in a paired setting.
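The abstract describes the idea only at a high level. The following is a minimal, illustrative sketch (not the authors' implementation) of how pseudo-pairs synthesized in one direction might be injected into the unpaired pools consumed by a CycleGAN-style trainer. The `degrade` function and `UnpairedPool` class are hypothetical stand-ins for a real one-directional synthesis step and a real data loader.

```python
# Illustrative sketch, NOT the paper's code: pseudo-pairs are synthesized in the
# "easy" direction (here: degrading a domain-B sample into an approximate
# domain-A counterpart) and added to the unpaired pools, so an unpaired trainer
# implicitly sees them as aligned samples without any explicit pairing loss.

import random
import numpy as np

def degrade(image: np.ndarray, noise_std: float = 0.1) -> np.ndarray:
    """Cheap one-directional synthesis: approximate a domain-A counterpart
    of a domain-B image (additive noise is a toy stand-in)."""
    return np.clip(image + np.random.normal(0.0, noise_std, image.shape), 0.0, 1.0)

class UnpairedPool:
    """Holds unpaired samples of one domain; the trainer draws from it at random."""
    def __init__(self, samples):
        self.samples = list(samples)

    def add(self, sample):
        self.samples.append(sample)

    def draw(self):
        return random.choice(self.samples)

# Toy unpaired source sets standing in for real images.
domain_a = UnpairedPool(np.random.rand(64, 64) for _ in range(20))
domain_b = UnpairedPool(np.random.rand(64, 64) for _ in range(20))

# Inject implicit pairs: for a subset of domain-B images, synthesize a pseudo
# domain-A counterpart and add it to the domain-A pool. The correspondence is
# never handed to the loss; it only shapes the two unpaired distributions.
for b_img in domain_b.samples[:10]:
    a_pseudo = degrade(b_img)
    domain_a.add(a_pseudo)  # (a_pseudo, b_img) now exists implicitly across the pools

# An unpaired trainer would then draw from the augmented pools as usual.
a_batch = [domain_a.draw() for _ in range(4)]
b_batch = [domain_b.draw() for _ in range(4)]
print(len(domain_a.samples), len(domain_b.samples))  # 30, 20
```

The key point the sketch illustrates is that the pairing between `a_pseudo` and its source `b_img` is never given to the objective explicitly; according to the abstract, using such samples implicitly in the unpaired setting is more effective than using them directly in a paired setting.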

Authors (4)
  1. Yiftach Ginger (2 papers)
  2. Dov Danon (9 papers)
  3. Hadar Averbuch-Elor (43 papers)
  4. Daniel Cohen-Or (172 papers)
Citations (1)
