
Self-Paced Learning for Open-Set Domain Adaptation (2303.05933v3)

Published 10 Mar 2023 in cs.CV

Abstract: Domain adaptation tackles the challenge of generalizing knowledge acquired from a source domain to a target domain with a different data distribution. Traditional domain adaptation methods presume that the classes in the source and target domains are identical, which is not always the case in real-world scenarios. Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain. OSDA aims not only to recognize target samples belonging to the common classes shared by the source and target domains but also to detect samples from unknown classes. We propose a novel self-paced learning framework, referred to as SPLOS (self-paced learning for open-set), to precisely distinguish common and unknown class samples. To utilize unlabeled target samples for self-paced learning, we generate pseudo labels and design a cross-domain mixup method tailored for OSDA scenarios. This strategy minimizes the noise from pseudo labels and ensures that the model progressively learns common class features of the target domain, beginning with simpler examples and advancing to more complex ones. Furthermore, unlike existing OSDA methods that require manual tuning of a threshold hyperparameter to separate common and unknown classes, our approach self-tunes a suitable threshold, eliminating the need for empirical tuning during testing. Comprehensive experiments demonstrate that our method consistently achieves superior performance on different benchmarks compared with various state-of-the-art methods.
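The abstract sketches three mechanisms: confidence-based pseudo-labeling of unlabeled target samples, a cross-domain mixup between source and pseudo-labeled target data, and a self-paced threshold that starts strict and relaxes as training proceeds. A minimal NumPy sketch of how such pieces could fit together is below; the confidence rule, the Beta-distributed mixup coefficient, and the linear quantile schedule are illustrative assumptions, not the exact SPLOS formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_pseudo_labels(probs, conf_floor=0.8):
    """Keep target samples whose top softmax probability exceeds conf_floor
    and pseudo-label them with the argmax class.
    (Hypothetical confidence rule; the paper's criterion may differ.)"""
    conf = probs.max(axis=1)
    keep = np.flatnonzero(conf >= conf_floor)
    return keep, probs[keep].argmax(axis=1)

def cross_domain_mixup(x_src, y_src, x_tgt, y_tgt, num_classes, alpha=0.2):
    """Standard mixup interpolation between source samples and
    pseudo-labeled target samples, producing soft mixed labels."""
    n = min(len(x_src), len(x_tgt))
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x_src[:n] + (1.0 - lam) * x_tgt[:n]
    y_mix = (lam * np.eye(num_classes)[y_src[:n]]
             + (1.0 - lam) * np.eye(num_classes)[y_tgt[:n]])
    return x_mix, y_mix

def self_paced_threshold(scores, step, start_q=0.9, decay=0.05, floor_q=0.5):
    """Self-paced schedule: begin with a strict (high-quantile) cut so only
    the easiest, most confident samples count as common classes, then relax
    the cut each step. The linear schedule here is illustrative only."""
    q = max(start_q - decay * step, floor_q)
    return np.quantile(scores, q)

# Toy usage with random data standing in for classifier outputs/features.
probs = rng.dirichlet(np.full(10, 0.1), size=64)  # peaked target softmax outputs
idx, y_pseudo = select_pseudo_labels(probs)
x_src = rng.normal(size=(32, 128))                # source features
y_src = rng.integers(0, 10, size=32)
x_tgt = rng.normal(size=(len(idx), 128))          # confident target features
x_mix, y_mix = cross_domain_mixup(x_src, y_src, x_tgt, y_pseudo, num_classes=10)
tau = self_paced_threshold(probs.max(axis=1), step=3)
print(x_mix.shape, y_mix.shape, round(float(tau), 3))
```

In a real OSDA pipeline the scores would come from a trained classifier and the threshold would be re-estimated each epoch; the toy data here only exercises the shapes.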

Authors (5)
  1. Xinghong Liu (3 papers)
  2. Yi Zhou (438 papers)
  3. Tao Zhou (398 papers)
  4. Jie Qin (68 papers)
  5. Shengcai Liao (46 papers)
Citations (2)