Domain-Agnostic Few-Shot Classification by Learning Disparate Modulators (1909.04999v2)

Published 11 Sep 2019 in cs.LG, cs.CV, and stat.ML

Abstract: Although few-shot learning research has advanced rapidly with the help of meta-learning, its practical usefulness is still limited because most existing methods assume that all meta-training and meta-testing examples come from a single domain. We propose a simple but effective approach to few-shot classification in which the task distribution spans multiple domains, including ones never seen during meta-training. The key idea is to build a pool of models to cover this wide task distribution and learn to select the best one for a particular task through cross-domain meta-learning. All models in the pool share a base network while each model has a separate modulator to refine the base network in its own way. This framework allows the pool to have representational diversity without losing beneficial domain-invariant features. We verify the effectiveness of the proposed algorithm through experiments on various datasets across diverse domains.
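
The sketch below illustrates the shared-base-plus-modulators idea described in the abstract: a single base network, a pool of lightweight per-model modulators, and a routine that picks one modulator for a given few-shot support set. It is a minimal, hypothetical PyTorch example, not the authors' implementation; the FiLM-style scale-and-shift modulator, the prototypical-network selection heuristic (the paper instead learns the selection via cross-domain meta-learning), and all layer sizes and names are assumptions.

```python
# Illustrative sketch only: architecture details and the selection rule are
# assumptions, not the implementation from the paper (arXiv:1909.04999).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BaseNet(nn.Module):
    """Feature extractor shared by every model in the pool."""

    def __init__(self, in_dim=784, hid=64, out_dim=64):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hid)
        self.fc2 = nn.Linear(hid, out_dim)

    def forward(self, x, modulator=None):
        h = torch.relu(self.fc1(x))
        if modulator is not None:       # each pool member refines the base in its own way
            h = modulator(h)
        return self.fc2(h)


class Modulator(nn.Module):
    """Lightweight per-model module (FiLM-like scale and shift) on top of the base."""

    def __init__(self, dim=64):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, h):
        return h * self.scale + self.shift


def select_modulator(base, modulators, support_x, support_y):
    """Pick the modulator with the lowest prototypical-style loss on the support set.

    This stand-in heuristic replaces the paper's learned, meta-trained selector.
    """
    classes = support_y.unique()
    # Map raw labels to 0..C-1 so they index the prototype rows.
    targets = (support_y.unsqueeze(1) == classes).float().argmax(1)
    best, best_loss = None, float("inf")
    for m in modulators:
        with torch.no_grad():
            z = base(support_x, m)
            protos = torch.stack([z[support_y == c].mean(0) for c in classes])
            loss = F.cross_entropy(-torch.cdist(z, protos), targets)
        if loss.item() < best_loss:
            best, best_loss = m, loss.item()
    return best


# Example usage with random data (shapes are illustrative).
base = BaseNet()
pool = [Modulator() for _ in range(5)]            # one modulator per model in the pool
sx, sy = torch.randn(10, 784), torch.randint(0, 2, (10,))
chosen = select_modulator(base, pool, sx, sy)     # modulator used to classify the query set
```

Because only the small modulators differ between pool members, the pool gains representational diversity across domains while the shared base network preserves domain-invariant features, which is the trade-off the abstract highlights.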

Authors (4)
  1. Yongseok Choi (7 papers)
  2. Junyoung Park (37 papers)
  3. Subin Yi (4 papers)
  4. Dong-Yeon Cho (5 papers)
