
ProtoDA: Efficient Transfer Learning for Few-Shot Intent Classification (2101.11753v1)

Published 28 Jan 2021 in cs.CL and cs.LG

Abstract: Practical sequence classification tasks in natural language processing often suffer from low training data availability for target classes. Recent works towards mitigating this problem have focused on transfer learning using embeddings pre-trained on often unrelated tasks, for instance, language modeling. We adopt an alternative approach by transfer learning on an ensemble of related tasks using prototypical networks under the meta-learning paradigm. Using intent classification as a case study, we demonstrate that increasing variability in training tasks can significantly improve classification performance. Further, we apply data augmentation in conjunction with meta-learning to reduce sampling bias. We make use of a conditional generator for data augmentation that is trained directly using the meta-learning objective and simultaneously with prototypical networks, hence ensuring that data augmentation is customized to the task. We explore augmentation in the sentence embedding space as well as the prototypical embedding space. Combining meta-learning with augmentation provides up to 6.49% and 8.53% relative F1-score improvements over the best-performing systems in 5-shot and 10-shot learning, respectively.
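The prototypical-network classification step described in the abstract can be sketched as follows. This is a minimal illustration of the general technique (Snell et al., 2017), not the authors' implementation; the function names and the squared-Euclidean metric are assumptions:

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # Prototype = mean of the support-set embeddings belonging to each class.
    # support_emb: (n_support, dim), support_labels: (n_support,)
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Assign each query embedding to its nearest prototype by
    # squared Euclidean distance (the metric used in prototypical networks).
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)
```

In the paper's setup, augmentation would add generator-produced points to the support embeddings (or perturb the prototypes themselves) before `classify` is called, which is how the conditional generator can be trained end-to-end against the same meta-learning objective.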

Authors (6)
  1. Manoj Kumar (83 papers)
  2. Varun Kumar (35 papers)
  3. Hadrien Glaude (2 papers)
  4. Cyprien de Lichy (1 paper)
  5. Aman Alok (6 papers)
  6. Rahul Gupta (146 papers)
Citations (17)
