
Reiterative Domain Aware Multi-Target Adaptation (2109.00919v2)

Published 26 Aug 2021 in cs.CV

Abstract: Most domain adaptation methods focus on single-source-single-target adaptation settings. Multi-target domain adaptation is a powerful extension in which a single classifier is learned for multiple unlabeled target domains. Building a multi-target classifier requires a feature extractor that generalizes well across domains and an effective aggregation of features from the labeled source and the different unlabeled target domains. For the first, we use the recently popular Transformer as the feature extraction backbone. For the second, we use a co-teaching-based approach with a dual-classifier head, one of which is a graph neural network. The proposed approach uses a sequential adaptation strategy that adapts to one domain at a time, starting from the target domains that are most similar to the source, assuming that the network finds it easier to adapt to such target domains. After adapting to each target, samples with a softmax-based confidence score greater than a threshold are added to the pseudo-source, thus aggregating knowledge from different domains. However, the softmax score is not entirely trustworthy as a confidence measure and may be high for unreliable samples if the network is trained for many iterations. To mitigate this effect, we adopt a reiterative approach in which we reduce the number of target adaptation iterations but reiterate multiple times over the target domains. The experimental evaluation on the Office-Home, Office-31, and DomainNet datasets shows significant improvement over existing methods. We achieve a 10.7% average improvement on the Office-Home dataset over the state-of-the-art methods.
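
The following is a minimal sketch of the reiterative, confidence-thresholded pseudo-source aggregation loop described in the abstract. It is not the authors' implementation: the function names (`adapt_fn`, `select_confident`, `reiterative_adaptation`), the threshold value, and the number of reiterations are illustrative assumptions; the actual paper additionally uses a Transformer backbone and a co-teaching dual-classifier head, which are omitted here for brevity.

```python
# Hedged sketch (assumed names and hyperparameters, not the paper's code).
import torch
import torch.nn.functional as F

CONF_THRESHOLD = 0.9   # assumed softmax confidence cutoff
N_REITERATIONS = 3     # several short passes instead of one long adaptation

def select_confident(model, target_loader, threshold=CONF_THRESHOLD):
    """Return pseudo-labeled target samples whose softmax confidence exceeds the threshold."""
    pseudo_x, pseudo_y = [], []
    model.eval()
    with torch.no_grad():
        for x in target_loader:                    # unlabeled target batches
            probs = F.softmax(model(x), dim=1)
            conf, pred = probs.max(dim=1)
            keep = conf > threshold                # keep only high-confidence predictions
            pseudo_x.append(x[keep])
            pseudo_y.append(pred[keep])
    return torch.cat(pseudo_x), torch.cat(pseudo_y)

def reiterative_adaptation(model, source_set, target_loaders, adapt_fn):
    """target_loaders are assumed to be ordered from most to least source-similar.

    source_set is a list of (x, y) pairs; adapt_fn(model, pseudo_source, loader)
    is a placeholder for one short adaptation round on a single target domain.
    """
    pseudo_source = list(source_set)               # starts as the labeled source
    for _ in range(N_REITERATIONS):                # reiterate with few iterations each
        for target_loader in target_loaders:       # one target domain at a time
            adapt_fn(model, pseudo_source, target_loader)
            x, y = select_confident(model, target_loader)
            # aggregate confident target samples into the pseudo-source
            pseudo_source += [(xi, yi) for xi, yi in zip(x, y)]
    return model, pseudo_source
```

Keeping each adaptation round short and instead looping several times over the target domains is what the abstract refers to as the reiterative strategy: it limits the chance of the softmax assigning high confidence to unreliable samples after long training.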

Authors (4)
  1. Sudipan Saha (19 papers)
  2. Shan Zhao (32 papers)
  3. Nasrullah Sheikh (7 papers)
  4. Xiao Xiang Zhu (201 papers)
