ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning (2205.05282v3)

Published 11 May 2022 in cs.CV

Abstract: Cross-domain few-shot learning (CD-FSL), in which only a few target samples are available and the source and target domains differ substantially, has recently attracted significant attention. Recent CD-FSL studies generally focus on transfer-learning approaches, in which a neural network is pre-trained on a large labeled source-domain dataset and then transferred to the target-domain data. Although the labeled source dataset may provide suitable initial parameters for the target data, the domain difference between source and target can hinder fine-tuning on the target domain. This paper proposes a simple yet powerful method that re-randomizes the parameters fitted on the source domain before adapting to the target data. The re-randomization resets source-specific parameters of the pre-trained model and thus facilitates fine-tuning on the target domain, improving few-shot performance.
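The core idea, resetting source-specific parameters to fresh random values while keeping the rest of the pre-trained weights, can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the parameter names, which layers count as "source-specific" (here, a late layer), and the Gaussian re-initialization scale are all illustrative assumptions.

```python
import random

def re_randomize(pretrained, reset_keys, seed=0):
    """Copy `pretrained`, replacing parameters named in `reset_keys`
    with fresh random values of the same size; all other parameters
    keep their source-trained values."""
    rng = random.Random(seed)
    params = {}
    for name, w in pretrained.items():
        if name in reset_keys:
            # Reset source-specific parameters (e.g. a late layer) so
            # fine-tuning on the target domain starts from fresh values.
            params[name] = [rng.gauss(0.0, 0.02) for _ in w]
        else:
            # Transfer source-general parameters unchanged.
            params[name] = list(w)
    return params

# Toy "pre-trained" parameters as flat lists (stand-ins for weight tensors)
pretrained = {
    "block1.weight": [1.0] * 8,   # early layer: transferred as-is
    "block4.weight": [1.0] * 8,   # late layer: re-randomized before fine-tuning
}
init = re_randomize(pretrained, reset_keys={"block4.weight"})
```

After this step, fine-tuning on the few target samples proceeds from `init`, so the re-randomized layers are free to fit the target domain rather than being anchored to source-specific features.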

Authors (6)
  1. Jaehoon Oh (18 papers)
  2. Sungnyun Kim (19 papers)
  3. Namgyu Ho (10 papers)
  4. Jin-Hwa Kim (42 papers)
  5. Hwanjun Song (44 papers)
  6. Se-Young Yun (114 papers)
Citations (4)
