
Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty (2202.01339v3)

Published 1 Feb 2022 in cs.LG

Abstract: Cross-domain few-shot learning (CD-FSL) has drawn increasing attention for handling large differences between the source and target domains--an important concern in real-world scenarios. To overcome these large differences, recent works have considered exploiting small-scale unlabeled data from the target domain during the pre-training stage. This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain. In this paper, we empirically investigate which pre-training is preferred based on domain similarity and few-shot difficulty of the target domain. We discover that the performance gain of self-supervised pre-training over supervised pre-training becomes large when the target domain is dissimilar to the source domain, or the target domain itself has low few-shot difficulty. We further design two pre-training schemes, mixed-supervised and two-stage learning, that improve performance. In this light, we present six findings for CD-FSL, which are supported by extensive experiments and analyses on three source and eight target benchmark datasets with varying levels of domain similarity and few-shot difficulty. Our code is available at https://github.com/sungnyun/understanding-cdfsl.
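The abstract mentions two pre-training schemes, one of which is two-stage learning: supervised pre-training on the labeled source domain followed by self-supervised pre-training on small-scale unlabeled target data. The sketch below illustrates that general idea only; the loader names, hyperparameters, and the SimCLR-style contrastive loss are illustrative assumptions, not the authors' exact implementation (see their repository for that).

```python
# Hedged sketch of a two-stage pre-training pipeline for CD-FSL.
# Stage 1: supervised training on the source domain.
# Stage 2: contrastive self-supervision on unlabeled target-domain images.
import torch
import torch.nn as nn
import torch.nn.functional as F


def supervised_stage(encoder, classifier, source_loader, epochs=10, lr=1e-3):
    """Stage 1: standard cross-entropy training on labeled source data."""
    params = list(encoder.parameters()) + list(classifier.parameters())
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in source_loader:
            logits = classifier(encoder(x))
            loss = F.cross_entropy(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()


def nt_xent(z1, z2, temperature=0.5):
    """A minimal NT-Xent (SimCLR-style) loss over two augmented views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    # Mask self-similarities; the positive for view i is the other view of the same image.
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def self_supervised_stage(encoder, target_loader, epochs=10, lr=1e-3):
    """Stage 2: contrastive pre-training on unlabeled target-domain images.

    Assumes the loader yields two augmented views of each image.
    """
    opt = torch.optim.SGD(encoder.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for view1, view2 in target_loader:
            loss = nt_xent(encoder(view1), encoder(view2))
            opt.zero_grad()
            loss.backward()
            opt.step()
```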

Authors (6)
  1. Jaehoon Oh (18 papers)
  2. Sungnyun Kim (19 papers)
  3. Namgyu Ho (10 papers)
  4. Jin-Hwa Kim (42 papers)
  5. Hwanjun Song (44 papers)
  6. Se-Young Yun (114 papers)
Citations (27)
