
One-Sided Unsupervised Domain Mapping (1706.00826v2)

Published 2 Jun 2017 in cs.CV

Abstract: In unsupervised domain mapping, the learner is given two unmatched datasets $A$ and $B$. The goal is to learn a mapping $G_{AB}$ that translates a sample in $A$ to the analog sample in $B$. Recent approaches have shown that when learning simultaneously both $G_{AB}$ and the inverse mapping $G_{BA}$, convincing mappings are obtained. In this work, we present a method of learning $G_{AB}$ without learning $G_{BA}$. This is done by learning a mapping that maintains the distance between a pair of samples. Moreover, good mappings are obtained, even by maintaining the distance between different parts of the same sample before and after mapping. We present experimental results that the new method not only allows for one sided mapping learning, but also leads to preferable numerical results over the existing circularity-based constraint. Our entire code is made publicly available at https://github.com/sagiebenaim/DistanceGAN.

Citations (294)

Summary

  • The paper introduces a novel one-sided framework that preserves source domain invariants during unsupervised mapping.
  • It leverages a distance-preservation loss function to maintain structural consistency between source and target data.
  • Experiments on benchmark datasets show competitive image quality and classification accuracy compared to bidirectional techniques.

One-Sided Unsupervised Domain Mapping

The paper "One-Sided Unsupervised Domain Mapping" by Sagie Benaim and Lior Wolf presents a significant contribution to the field of unsupervised domain adaptation. This research focuses on the task of mapping samples from one domain to another without paired samples, a problem that arises in various applications where data annotations are scarce or nonexistent.

Summary

The authors introduce a framework that learns the mapping $G_{AB}$ alone, without jointly training the inverse mapping $G_{BA}$. In contrast to prior approaches, which rely on bidirectional (circularity-based) training or on assumptions about the domain distributions, the method constructs a mapping that preserves intrinsic structure of the source domain while achieving a meaningful translation into the target domain.

The core methodology trains a neural network $G_{AB}$ in an unsupervised fashion by minimizing a distance-preservation loss: the distance between a pair of source samples should match the distance between their mapped counterparts, so that the relative geometry of the source domain survives the mapping. The same idea extends to a single sample, by preserving the distance between different parts of the sample (e.g., its two halves) before and after mapping. The approach rests on the assumption that preserving relative distances suffices to retain the significant features and characteristics inherent in the data.
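To make the loss concrete, here is a minimal PyTorch sketch of both variants. It illustrates the idea as described above and is not the authors' implementation (see the linked DistanceGAN repository for that); in particular, the batch-level standardization of distances and all function names are assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

def pairwise_distance_loss(x_a, g_ab, eps=1e-8):
    """Pairwise variant: for every pair of source samples (x_i, x_j),
    the L1 distance between them should match the L1 distance between
    their mapped counterparts (G_AB(x_i), G_AB(x_j))."""
    y = g_ab(x_a)                               # mapped batch
    n = x_a.size(0)
    xf, yf = x_a.reshape(n, -1), y.reshape(n, -1)
    i, j = torch.triu_indices(n, n, offset=1)   # all unordered pairs
    d_src = (xf[i] - xf[j]).abs().mean(dim=1)   # distances in domain A
    d_map = (yf[i] - yf[j]).abs().mean(dim=1)   # distances after mapping
    # Standardize each set of distances so the two scales are comparable
    # (a normalization choice assumed here, not stated in the summary).
    d_src = (d_src - d_src.mean()) / (d_src.std() + eps)
    d_map = (d_map - d_map.mean()) / (d_map.std() + eps)
    return F.l1_loss(d_map, d_src)

def self_distance_loss(x_a, g_ab):
    """Self-distance variant: the distance between the two halves of a
    single sample should be preserved under the mapping, which allows
    training without sample pairs."""
    y = g_ab(x_a)
    w = x_a.size(-1)
    # Slice from both ends so the halves match even for odd widths.
    d_src = (x_a[..., : w // 2] - x_a[..., -(w // 2):]).abs().mean()
    d_map = (y[..., : w // 2] - y[..., -(w // 2):]).abs().mean()
    return (d_src - d_map).abs()
```

Either term can be dropped into a standard training loop; the self-distance variant is notable because it needs only one sample at a time rather than pairs.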

Key Contributions and Results

A primary contribution of the paper is the demonstration that one-sided domain mapping can effectively preserve the invariant characteristics of the source domain. This is particularly useful in scenarios where the source domain's structure is critical and must survive the transformation into the target domain.

Experiments are conducted on standard benchmark datasets. The numerical results show that the one-sided approach is not only feasible but also competitive with existing bidirectional domain mapping techniques. On metrics such as translation quality and post-mapping classification accuracy, the proposed method often surpasses the circularity-based baselines in preserving crucial features.

Implications and Future Directions

The implications of this research are twofold. Practically, the method provides an efficient alternative for applications such as image-to-image translation, where obtaining paired data samples is costly or impossible. Theoretically, it challenges the prevailing assumption that bidirectional mapping is necessary for effective domain translation, potentially sparking new lines of research into one-sided methodologies.

Future developments may refine the distance-preservation loss to enhance mapping precision, or adapt similar frameworks to other unsupervised learning tasks. It would also be worthwhile to investigate combining this framework with other domain adaptation strategies; notably, the distance term pairs naturally with an adversarial objective, as sketched below.
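A minimal, hypothetical sketch of such a combination follows, assuming a standard non-saturating GAN loss with a discriminator d_b on the target domain and reusing pairwise_distance_loss from the earlier sketch. The weight lam and all names are illustrative assumptions, not values taken from the paper.

```python
import torch
from torch.nn.functional import binary_cross_entropy_with_logits as bce

def train_step(g_ab, d_b, x_a, x_b, opt_g, opt_d, lam=0.5):
    """One optimization step combining an adversarial loss on domain B
    with the distance-preservation term defined earlier."""
    # Discriminator update: real samples from B vs. mapped samples from A.
    opt_d.zero_grad()
    fake_b = g_ab(x_a).detach()
    real_logits, fake_logits = d_b(x_b), d_b(fake_b)
    loss_d = (bce(real_logits, torch.ones_like(real_logits))
              + bce(fake_logits, torch.zeros_like(fake_logits)))
    loss_d.backward()
    opt_d.step()

    # Generator update: fool the discriminator while preserving the
    # pairwise distances of the source batch.
    opt_g.zero_grad()
    gen_logits = d_b(g_ab(x_a))
    loss_g = (bce(gen_logits, torch.ones_like(gen_logits))
              + lam * pairwise_distance_loss(x_a, g_ab))
    loss_g.backward()
    opt_g.step()
    return loss_g.item(), loss_d.item()
```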

This paper lays the groundwork for rethinking how domain mappings are conceptualized in unsupervised settings, providing a foundation for both current applications and future innovations in artificial intelligence.
