Your Classifier can Secretly Suffice Multi-Source Domain Adaptation (2103.11169v1)

Published 20 Mar 2021 in cs.LG and cs.CV

Abstract: Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain under domain shift. Existing methods aim to minimize this domain shift using auxiliary distribution-alignment objectives. In this work, we present a different perspective on MSDA, in which deep models are observed to implicitly align the domains under label supervision. We therefore aim to exploit this implicit alignment, without additional training objectives, to perform adaptation. To this end, we use pseudo-labeled target samples and enforce classifier agreement on the pseudo-labels, a process called Self-supervised Implicit Alignment (SImpAl). We find that SImpAl readily works even under category shift among the source domains. Further, we propose classifier agreement as a cue to determine training convergence, resulting in a simple training algorithm. We provide a thorough evaluation of our approach on five benchmarks, along with detailed insights into each component of our approach.
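
The abstract's core mechanism — pseudo-labeling target samples and training only where the classifiers agree — can be illustrated in a few lines. The sketch below is a minimal, hypothetical rendering, assuming one classifier head per source domain; the function names, loss weighting, and training-loop structure are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def agreement_pseudo_labels(logits_per_head):
    """Return pseudo-labels and an agreement mask over a target batch.

    logits_per_head: list of (B, K) tensors, one per source-specific head.
    A sample is pseudo-labeled only when every head predicts the same class.
    """
    preds = torch.stack([l.argmax(dim=1) for l in logits_per_head])  # (H, B)
    agree = (preds == preds[0]).all(dim=0)                           # (B,)
    return preds[0], agree

def adaptation_step(feature_extractor, heads, target_x):
    """One hypothetical adaptation step in the spirit of SImpAl: supervise
    every head on the agreed pseudo-labels, with no explicit alignment loss."""
    feats = feature_extractor(target_x)
    logits = [head(feats) for head in heads]
    with torch.no_grad():
        pseudo, agree = agreement_pseudo_labels(logits)
    if not agree.any():
        return feats.new_zeros(())  # no confident samples in this batch
    # Cross-entropy on agreed samples pulls all heads toward agreement.
    loss = sum(F.cross_entropy(l[agree], pseudo[agree]) for l in logits)
    return loss / len(logits)
```

Under this reading, the fraction of target samples on which the heads agree also gives a natural convergence signal, matching the abstract's proposal to use classifier agreement as a stopping cue: training can halt once the agreement rate plateaus.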

Authors (5)
  1. Naveen Venkat (6 papers)
  2. Jogendra Nath Kundu (26 papers)
  3. Durgesh Kumar Singh (1 paper)
  4. Ambareesh Revanur (9 papers)
  5. R. Venkatesh Babu (108 papers)
Citations (66)
