Effectiveness of Pre-training for Few-shot Intent Classification (2109.05782v2)

Published 13 Sep 2021 in cs.CL

Abstract: This paper investigates the effectiveness of pre-training for few-shot intent classification. While existing paradigms commonly further pre-train language models such as BERT on a vast amount of unlabeled text, we find it highly effective and efficient to simply fine-tune BERT with a small set of labeled utterances from public datasets. Specifically, fine-tuning BERT with roughly 1,000 labeled utterances yields a pre-trained model, IntentBERT, which easily surpasses existing pre-trained models for few-shot intent classification on novel domains with very different semantics. The high effectiveness of IntentBERT confirms the feasibility and practicality of few-shot intent detection, and its strong generalization across domains suggests that intent classification tasks may share a similar underlying structure, which can be efficiently learned from a small set of labeled data. The source code can be found at https://github.com/hdzhang-code/IntentBERT.
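A minimal sketch of the recipe the abstract describes: supervised fine-tuning of BERT on a small pool of labeled utterances, after which the encoder can be reused for few-shot intent classification on new domains. The dataset, label set, and hyperparameters below are illustrative assumptions, not the authors' exact configuration; see the linked repository for the official implementation.

```python
# Hedged sketch: fine-tune BERT with a standard classification head on
# ~1,000 labeled utterances, then keep the encoder as an "IntentBERT"-style
# feature extractor. Placeholder data and settings are assumptions.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizerFast, BertForSequenceClassification

class UtteranceDataset(Dataset):
    """Wraps (utterance, intent_id) pairs for supervised fine-tuning."""
    def __init__(self, texts, labels, tokenizer, max_len=64):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

# Hypothetical source-domain data: utterances from public intent datasets,
# each paired with an integer intent id (placeholders shown here).
texts = ["book a flight to tokyo", "play some jazz music"]
labels = [0, 1]
num_intents = 2

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=num_intents)

loader = DataLoader(UtteranceDataset(texts, labels, tokenizer),
                    batch_size=16, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):                      # a few epochs over ~1,000 examples
    for batch in loader:
        optimizer.zero_grad()
        loss = model(**batch).loss          # cross-entropy over intent labels
        loss.backward()
        optimizer.step()

# The fine-tuned encoder can then embed a handful of labeled utterances from
# a novel domain; new queries are classified with a simple classifier
# (e.g. nearest centroid or logistic regression) on those embeddings.
model.bert.save_pretrained("intentbert-encoder")
```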

Authors (7)
  1. Haode Zhang (7 papers)
  2. Yuwei Zhang (48 papers)
  3. Li-Ming Zhan (10 papers)
  4. Jiaxin Chen (55 papers)
  5. Guangyuan Shi (8 papers)
  6. Xiao-Ming Wu (91 papers)
  7. Albert Y. S. Lam (34 papers)
Citations (41)