Better Pseudo-label: Joint Domain-aware Label and Dual-classifier for Semi-supervised Domain Generalization (2110.04820v2)

Published 10 Oct 2021 in cs.CV

Abstract: With the goal of directly generalizing a trained model to unseen target domains, domain generalization (DG), a newly proposed learning paradigm, has attracted considerable attention. Previous DG models usually require a sufficient quantity of annotated samples from the observed source domains during training. In this paper, we relax this full-annotation requirement and investigate semi-supervised domain generalization (SSDG), where only one source domain is fully annotated and the other source domains remain entirely unlabeled during training. Facing the twin challenges of tackling the domain gap between observed source domains and predicting on unseen target domains, we propose a novel deep framework that combines domain-aware labels with a dual classifier to produce high-quality pseudo-labels. Concretely, to predict accurate pseudo-labels under domain shift, a domain-aware pseudo-labeling module is developed. Moreover, since generalization and pseudo-labeling pursue inconsistent goals (the former prevents overfitting on all source domains, while the latter may overfit the unlabeled source domains in pursuit of high accuracy), we employ a dual classifier to perform pseudo-labeling and domain generalization independently during training. Once accurate pseudo-labels are generated for the unlabeled source domains, a domain mixup operation is applied to augment new domains between the labeled and unlabeled domains, which is beneficial for boosting the generalization capability of the model. Extensive results on publicly available DG benchmark datasets show the efficacy of our proposed SSDG method.
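
To make the pipeline concrete, below is a minimal PyTorch sketch of the dual-classifier and domain-mixup ideas described in the abstract. It is not the authors' reference implementation: all names (DualHeadNet, domain_mixup, tau, alpha) are illustrative, a plain confidence threshold stands in for the paper's domain-aware pseudo-labeling module, and the labeled and unlabeled batches are assumed to have equal size for the mixup step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Beta

class DualHeadNet(nn.Module):
    """Shared feature extractor with two independent classifier heads:
    one dedicated to pseudo-labeling, one to domain generalization."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.head_pl = nn.Linear(feat_dim, num_classes)  # pseudo-labeling head
        self.head_dg = nn.Linear(feat_dim, num_classes)  # generalization head

    def forward(self, x):
        z = self.backbone(x)
        return self.head_pl(z), self.head_dg(z)

def domain_mixup(x_lab, y_lab, x_unl, y_unl, alpha: float = 1.0):
    """Interpolate a labeled-domain batch with a pseudo-labeled batch to
    synthesize samples from intermediate domains (targets are one-hot)."""
    lam = Beta(alpha, alpha).sample().item()
    x_mix = lam * x_lab + (1.0 - lam) * x_unl
    y_mix = lam * y_lab + (1.0 - lam) * y_unl
    return x_mix, y_mix

def training_step(model, x_lab, y_lab, x_unl, num_classes,
                  tau: float = 0.95, alpha: float = 1.0):
    # 1) Pseudo-label the unlabeled-domain batch with the dedicated head.
    #    (A fixed confidence threshold stands in for the paper's
    #    domain-aware pseudo-labeling module.)
    with torch.no_grad():
        probs = F.softmax(model(x_unl)[0], dim=1)
        conf, y_hat = probs.max(dim=1)
    mask = (conf >= tau).float()  # trust only confident pseudo-labels

    # 2) Mix the labeled and pseudo-labeled domains (equal batch sizes assumed).
    y_lab_1h = F.one_hot(y_lab, num_classes).float()
    y_hat_1h = F.one_hot(y_hat, num_classes).float()
    x_mix, y_mix = domain_mixup(x_lab, y_lab_1h, x_unl, y_hat_1h, alpha)

    # 3) Train the heads independently: the pseudo-labeling head fits the
    #    unlabeled domain, the generalization head fits the mixed domains,
    #    and both see the fully labeled source domain.
    logits_pl_lab, logits_dg_lab = model(x_lab)
    loss_sup = (F.cross_entropy(logits_pl_lab, y_lab)
                + F.cross_entropy(logits_dg_lab, y_lab))

    loss_pl = (F.cross_entropy(model(x_unl)[0], y_hat, reduction="none")
               * mask).mean()

    log_p_mix = F.log_softmax(model(x_mix)[1], dim=1)
    loss_dg = -(y_mix * log_p_mix).sum(dim=1).mean()  # soft-target CE

    return loss_sup + loss_pl + loss_dg
```

Giving each objective its own head while sharing the backbone mirrors the abstract's point that pseudo-labeling and generalization pursue inconsistent goals: gradients from the confident-pseudo-label loss never flow through the generalization head, and vice versa.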

Authors (4)
  1. Ruiqi Wang (62 papers)
  2. Lei Qi (84 papers)
  3. Yinghuan Shi (79 papers)
  4. Yang Gao (761 papers)
Citations (18)
