Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels (2003.08264v1)

Published 18 Mar 2020 in cs.CV

Abstract: Existing unsupervised domain adaptation methods aim to transfer knowledge from a label-rich source domain to an unlabeled target domain. However, obtaining labels for some source domains may be very expensive, making the complete labeling used in prior work impractical. In this work, we investigate a new domain adaptation scenario with sparsely labeled source data, where only a few examples in the source domain have been labeled, while the target domain is unlabeled. We show that when labeled source examples are limited, existing methods often fail to learn features that are discriminative for both the source and target domains. We propose a novel Cross-Domain Self-supervised (CDS) learning approach for domain adaptation, which learns features that are not only domain-invariant but also class-discriminative. Our self-supervised learning method captures apparent visual similarity with in-domain self-supervision in a domain-adaptive manner and performs cross-domain feature matching with across-domain self-supervision. In extensive experiments on three standard benchmark datasets, our method significantly boosts target accuracy in the new scenario with few source labels and is even helpful in classical domain adaptation scenarios.
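
To make the two self-supervision signals in the abstract concrete, below is a minimal PyTorch sketch of one way such losses could be combined. This is not the authors' released implementation: the function names, the per-domain memory-bank setup, the entropy-based cross-domain term, and the temperature `tau` are all illustrative assumptions. The in-domain term treats each image as its own class within a domain (instance discrimination), while the cross-domain term encourages each feature to commit to a few instances in the other domain.

```python
import torch
import torch.nn.functional as F

def in_domain_loss(feat, bank, idx, tau=0.05):
    """Instance discrimination within one domain: each feature should be
    most similar to its own slot in that domain's memory bank."""
    logits = F.normalize(feat, dim=1) @ F.normalize(bank, dim=1).t() / tau
    return F.cross_entropy(logits, idx)  # positive = the sample's own slot

def cross_domain_loss(feat, other_bank, tau=0.05):
    """Cross-domain matching (sketched here as entropy minimization):
    sharpen each feature's similarity distribution over the *other*
    domain's memory bank so it aligns with a few cross-domain instances."""
    logits = F.normalize(feat, dim=1) @ F.normalize(other_bank, dim=1).t() / tau
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

# Toy usage with random tensors standing in for a CNN backbone's features.
B, N, D = 8, 100, 128                      # batch size, bank size, feat dim
feat_s, feat_t = torch.randn(B, D), torch.randn(B, D)
bank_s, bank_t = torch.randn(N, D), torch.randn(N, D)
idx_s, idx_t = torch.arange(B), torch.arange(B)  # assumed bank indices

loss = (in_domain_loss(feat_s, bank_s, idx_s)
        + in_domain_loss(feat_t, bank_t, idx_t)
        + cross_domain_loss(feat_s, bank_t)
        + cross_domain_loss(feat_t, bank_s))
```

The design intuition is that the in-domain term makes features class-discriminative without labels, while the symmetric cross-domain terms pull the two feature distributions together, matching the paper's stated goal of features that are both domain-invariant and discriminative.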

Authors (6)
  1. Donghyun Kim (129 papers)
  2. Kuniaki Saito (31 papers)
  3. Tae-Hyun Oh (75 papers)
  4. Bryan A. Plummer (64 papers)
  5. Stan Sclaroff (56 papers)
  6. Kate Saenko (178 papers)
Citations (42)
