
Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation (2103.16765v1)

Published 31 Mar 2021 in cs.CV

Abstract: Unsupervised Domain Adaptation (UDA) transfers predictive models from a fully-labeled source domain to an unlabeled target domain. In some applications, however, it is expensive even to collect labels in the source domain, making most previous works impractical. To cope with this problem, recent work performed instance-wise cross-domain self-supervised learning, followed by an additional fine-tuning stage. However, the instance-wise self-supervised learning only learns and aligns low-level discriminative features. In this paper, we propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA). PCS not only performs cross-domain low-level feature alignment, but it also encodes and aligns semantic structures in the shared embedding space across domains. Our framework captures category-wise semantic structures of the data by in-domain prototypical contrastive learning; and performs feature alignment through cross-domain prototypical self-supervision. Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively. Our project page is at http://xyue.io/pcs-fuda/index.html
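The abstract names two components: in-domain prototypical contrastive learning and cross-domain prototypical self-supervision. As a rough illustration of the first component only, the sketch below shows one common form of prototypical contrastive loss; it is an assumption-laden reading, not the authors' released implementation. It assumes L2-normalized embeddings, prototypes obtained from an external clustering step such as k-means, and a temperature-scaled cross-entropy over instance-prototype similarities.

```python
# Minimal sketch of a prototypical contrastive loss (illustrative only;
# not the paper's released code). Assumes embeddings and prototypes are
# L2-normalized, and that cluster assignments come from a separate
# in-domain clustering step such as k-means.
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(features, prototypes, assignments, temperature=0.1):
    """InfoNCE-style loss pulling each instance toward its assigned prototype.

    features:    (N, D) L2-normalized instance embeddings
    prototypes:  (K, D) L2-normalized cluster centroids
    assignments: (N,)   index of each instance's prototype
    """
    logits = features @ prototypes.t() / temperature  # (N, K) scaled cosine similarities
    return F.cross_entropy(logits, assignments)

# Toy usage with random tensors (hypothetical shapes).
feats = F.normalize(torch.randn(32, 128), dim=1)
protos = F.normalize(torch.randn(10, 128), dim=1)
assign = torch.randint(0, 10, (32,))
loss = prototypical_contrastive_loss(feats, protos, assign)
```

For the cross-domain component, one plausible reading is to apply the same loss with target-domain features scored against source-domain prototypes (and vice versa), encouraging the shared embedding space the abstract describes; the paper's project page has the authoritative details.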

Authors (7)
  1. Xiangyu Yue (93 papers)
  2. Zangwei Zheng (19 papers)
  3. Shanghang Zhang (173 papers)
  4. Yang Gao (761 papers)
  5. Trevor Darrell (324 papers)
  6. Kurt Keutzer (200 papers)
  7. Alberto Sangiovanni Vincentelli (8 papers)
Citations (140)
