
Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning (2110.03909v2)

Published 8 Oct 2021 in cs.LG and cs.CV

Abstract: In few-shot learning scenarios, the challenge is to generalize and perform well on new unseen examples when only very few labeled examples are available for each task. Model-agnostic meta-learning (MAML) has gained popularity as one of the representative few-shot learning methods for its flexibility and applicability to diverse problems. However, MAML and its variants often resort to a simple loss function without any auxiliary loss function or regularization terms that could help achieve better generalization. The problem is that each application and task may require a different auxiliary loss function, especially when tasks are diverse and distinct. Instead of attempting to hand-design an auxiliary loss function for each application and task, we introduce a new meta-learning framework with a loss function that adapts to each task. Our proposed framework, named Meta-Learning with Task-Adaptive Loss Function (MeTAL), demonstrates effectiveness and flexibility across various domains, such as few-shot classification and few-shot regression.
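To make the idea concrete, here is a minimal sketch of a MAML-style inner loop where the task loss is not a fixed MSE but a parameterized combination of terms whose coefficients (`phi`) stand in for the meta-learned, task-adaptive loss parameters. This is an illustration under stated assumptions, not the paper's actual implementation: `phi`, `task_adaptive_loss`, and `inner_adapt` are hypothetical names, the base learner is a linear model, and finite differences replace autograd for self-containment.

```python
import numpy as np

def mse(pred, y):
    return np.mean((pred - y) ** 2)

def task_adaptive_loss(pred, y, params, phi):
    # phi = (w_fit, w_reg): hypothetical learned loss coefficients.
    # In MeTAL these would be produced per task by a meta-learned
    # network; here they are fixed placeholders for illustration.
    return phi[0] * mse(pred, y) + phi[1] * np.sum(params ** 2)

def inner_adapt(theta, x, y, phi, lr=0.1, steps=20):
    # Inner-loop adaptation: gradient steps on the task-adaptive loss.
    # Central finite differences stand in for autograd.
    eps = 1e-5
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            tp = theta.copy(); tp[i] += eps
            tm = theta.copy(); tm[i] -= eps
            grad[i] = (task_adaptive_loss(x @ tp, y, tp, phi)
                       - task_adaptive_loss(x @ tm, y, tm, phi)) / (2 * eps)
        theta = theta - lr * grad
    return theta

# One toy few-shot regression task: adapt from a zero initialization
# on a small support set using the task-adaptive loss.
rng = np.random.default_rng(0)
x = rng.standard_normal((10, 3))
true_w = np.array([1.0, -1.0, 0.5])
y = x @ true_w
theta0 = np.zeros(3)
theta_adapted = inner_adapt(theta0, x, y, phi=(1.0, 0.01))
```

In the full method, the outer (meta) loop would additionally update `phi` so that inner-loop adaptation with the learned loss yields low query-set error across tasks; the sketch above covers only the inner loop.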

Authors (6)
  1. Sungyong Baik (17 papers)
  2. Janghoon Choi (7 papers)
  3. Heewon Kim (12 papers)
  4. Dohee Cho (1 paper)
  5. Jaesik Min (3 papers)
  6. Kyoung Mu Lee (107 papers)
Citations (88)
