Modular Adaptation for Cross-Domain Few-Shot Learning (2104.00619v1)

Published 1 Apr 2021 in cs.CV

Abstract: Adapting pre-trained representations has become the go-to recipe for learning new downstream tasks with limited examples. While the literature has demonstrated great success via representation learning, in this work we show that substantial performance improvements on downstream tasks can also be achieved by appropriate design of the adaptation process. Specifically, we propose a modular adaptation method that selectively performs multiple state-of-the-art (SOTA) adaptation methods in sequence. As different downstream tasks may require different types of adaptation, our modular adaptation enables the dynamic configuration of the most suitable modules based on the downstream task. Moreover, as an extension to existing cross-domain 5-way k-shot benchmarks (e.g., miniImageNet -> CUB), we create a new high-way (~100) k-shot benchmark with data from 10 different datasets. This benchmark provides a diverse set of domains and allows the use of stronger representations learned from ImageNet. Experimental results show that by customizing the adaptation process for each downstream task, our modular adaptation pipeline (MAP) improves 5-shot classification accuracy by 3.1% over finetuning and Prototypical Network baselines.
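
The abstract describes MAP only at a high level: a set of adaptation methods applied selectively, in sequence, with the configuration chosen per downstream task. The Python sketch below illustrates that compositional structure under stated assumptions; the module names, their placeholder bodies, and the per-task configurations are hypothetical illustrations, not the paper's actual components or selection mechanism.

```python
# Minimal sketch of a modular adaptation pipeline (MAP), based only on the
# abstract: adaptation modules are applied selectively, in sequence, with the
# configuration chosen per downstream task. Module names and bodies here are
# illustrative assumptions, not the paper's actual methods.
from typing import Callable, List

# An adaptation module maps (model_state, support_set) -> model_state.
AdaptationModule = Callable[[dict, list], dict]

def prototypical_init(state: dict, support: list) -> dict:
    # Placeholder: initialize classifier weights from class-mean prototypes,
    # in the spirit of Prototypical Networks.
    return dict(state, head="prototype-init")

def finetune_head(state: dict, support: list) -> dict:
    # Placeholder: fine-tune the classifier head on the support set.
    return dict(state, head="finetuned")

def modular_adaptation(state: dict,
                       support: list,
                       modules: List[AdaptationModule]) -> dict:
    """Apply the selected adaptation modules in sequence."""
    for module in modules:
        state = module(state, support)
    return state

# Hypothetical per-task configurations: a task close to the pre-training
# domain might use a lighter pipeline than a distant one.
near_domain_config = [prototypical_init]
far_domain_config = [prototypical_init, finetune_head]

adapted = modular_adaptation({"backbone": "imagenet-pretrained"},
                             support=[],  # support examples would go here
                             modules=far_domain_config)
```

The point this sketch makes concrete is that adaptation modules compose, so the pipeline can be reconfigured per task, which matches the abstract's claim that different downstream tasks may require different types of adaptation.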

Authors (7)
  1. Xiao Lin (181 papers)
  2. Meng Ye (47 papers)
  3. Yunye Gong (7 papers)
  4. Nikoletta Basiou (3 papers)
  5. Ajay Divakaran (43 papers)
  6. Yi Yao (49 papers)
  7. Giedrius Buracas (1 paper)
Citations (4)