
Meta-Learning across Meta-Tasks for Few-Shot Learning (2002.04274v4)

Published 11 Feb 2020 in cs.LG and stat.ML

Abstract: Existing meta-learning based few-shot learning (FSL) methods typically adopt an episodic training strategy whereby each episode contains a meta-task. Across episodes, these tasks are sampled randomly and their relationships are ignored. In this paper, we argue that inter-meta-task relationships should be exploited and that these tasks should be sampled strategically to assist in meta-learning. Specifically, we consider the relationships defined over two types of meta-task pairs and propose different strategies to exploit them. (1) Two meta-tasks with disjoint sets of classes: this pair is interesting because it is reminiscent of the relationship between the source seen classes and the target unseen classes, featuring a domain gap caused by class differences. A novel learning objective termed meta-domain adaptation (MDA) is proposed to make the meta-learned model more robust to this domain gap. (2) Two meta-tasks with identical sets of classes: this pair is useful because it can be employed to learn models that are robust against poorly sampled few-shot examples. To that end, a novel meta-knowledge distillation (MKD) objective is formulated. There are some mistakes in the experiments; we thus choose to withdraw this paper.
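
The paper was withdrawn, so its exact objectives are not reproduced here. Purely as an illustration of the two meta-task pair types the abstract describes, below is a minimal Python sketch: the pair-sampling helpers and the distillation loss are assumptions (a standard temperature-scaled KL distillation term standing in for the unspecified MKD objective), not the authors' formulation.

```python
# Illustrative sketch only: sampling the two meta-task pair types from the
# abstract, plus a generic distillation-style loss. All names and loss forms
# here are hypothetical; the withdrawn paper's actual MDA/MKD objectives
# are not specified in this abstract.
import random
import torch
import torch.nn.functional as F

def sample_disjoint_pair(classes, n_way=5):
    """MDA-style pair: two meta-tasks whose class sets do not overlap,
    mimicking the seen-class / unseen-class domain gap."""
    picked = random.sample(classes, 2 * n_way)
    return picked[:n_way], picked[n_way:]

def sample_identical_pair(classes, n_way=5):
    """MKD-style pair: two meta-tasks over the same classes; in practice each
    would draw different few-shot examples for those classes."""
    task_classes = random.sample(classes, n_way)
    return list(task_classes), list(task_classes)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Temperature-scaled KL divergence between the predictions of the two
    same-class episodes; one plausible form of a meta-knowledge
    distillation term (assumed, not taken from the paper)."""
    log_p = F.log_softmax(student_logits / T, dim=-1)
    q = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean") * (T * T)

# Usage: 64 seen classes (e.g., a miniImageNet-style split), 5-way episodes.
classes = list(range(64))
mda_a, mda_b = sample_disjoint_pair(classes)    # no shared classes
mkd_a, mkd_b = sample_identical_pair(classes)   # same classes, resampled shots
logits_a = torch.randn(10, 5)                   # stand-in query predictions
logits_b = torch.randn(10, 5)
loss = distillation_loss(logits_a, logits_b)
```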

Authors (6)
  1. Nanyi Fei (14 papers)
  2. Zhiwu Lu (51 papers)
  3. Yizhao Gao (19 papers)
  4. Jia Tian (27 papers)
  5. Tao Xiang (324 papers)
  6. Ji-Rong Wen (299 papers)
Citations (10)