
Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain Adaptation (2005.14007v3)

Published 25 May 2020 in cs.LG and stat.ML

Abstract: A complex combination of simultaneous supervised and unsupervised learning is believed to be the key to humans performing tasks seamlessly across multiple domains or tasks. This phenomenon of cross-domain learning has been studied extensively in the domain adaptation literature. Recent domain adaptation works rely on an indirect approach of first aligning the source and target domain distributions and then training a classifier on the labeled source domain to classify the target domain. However, the main drawback of this approach is that obtaining a near-perfect alignment of the domains may itself be difficult or impossible (e.g., for language domains). To address this, we follow Vapnik's imperative of statistical learning, which states that any desired problem should be solved in the most direct way rather than by solving a more general intermediate task, and propose a direct approach to domain adaptation that does not require domain alignment. We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly learn to contradistinguish the unlabeled target domain in an unsupervised way and to classify the source domain in a supervised way. We achieve state-of-the-art results on the Office-31 and VisDA-2017 datasets in both single-source and multi-source settings. We also observe that the contradistinguish loss improves model performance by increasing shape bias.
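To make the joint objective concrete, below is a minimal PyTorch sketch of one training step that combines a supervised cross-entropy loss on the labeled source batch with an unsupervised term on the unlabeled target batch. The unsupervised term here is a simplification: it pseudo-labels each target sample with the classifier's own most-confident prediction and maximizes that label's likelihood. This is an assumption for illustration, not the authors' exact contradistinguish loss; `model`, `optimizer`, and the batch tensors are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def joint_step(model, optimizer, src_x, src_y, tgt_x):
    """One optimization step over a labeled source batch and an unlabeled target batch.

    Sketch only: the unsupervised term approximates the contradistinguish
    objective via self-pseudo-labeling, which differs from the paper's
    published formulation.
    """
    optimizer.zero_grad()

    # Supervised term: ordinary cross-entropy on the labeled source domain.
    src_logits = model(src_x)
    supervised_loss = F.cross_entropy(src_logits, src_y)

    # Unsupervised term (simplified): pick the label the model itself finds
    # most likely for each target sample, then maximize that label's
    # log-likelihood. Pseudo-labels are detached so gradients flow only
    # through the likelihood term, not the label selection.
    tgt_logits = model(tgt_x)
    pseudo_y = tgt_logits.argmax(dim=1).detach()
    contradistinguish_loss = F.cross_entropy(tgt_logits, pseudo_y)

    loss = supervised_loss + contradistinguish_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point the abstract emphasizes survives even in this simplification: no domain-alignment stage is involved, and a single classifier is optimized directly on both domains at once.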

Authors (2)
  1. Sourabh Balgi (7 papers)
  2. Ambedkar Dukkipati (76 papers)
