Fine-tuning Pre-trained Language Models for Few-shot Intent Detection: Supervised Pre-training and Isotropization (2205.07208v3)

Published 15 May 2022 in cs.CL

Abstract: It is challenging to train a good intent classifier for a task-oriented dialogue system with only a few annotations. Recent studies have shown that fine-tuning pre-trained language models with a small amount of labeled utterances from public benchmarks in a supervised manner is extremely helpful. However, we find that supervised pre-training yields an anisotropic feature space, which may suppress the expressive power of the semantic representations. Inspired by recent research in isotropization, we propose to improve supervised pre-training by regularizing the feature space towards isotropy. We propose two regularizers based on contrastive learning and correlation matrix respectively, and demonstrate their effectiveness through extensive experiments. Our main finding is that it is promising to regularize supervised pre-training with isotropization to further improve the performance of few-shot intent detection. The source code can be found at https://github.com/fanolabs/isoIntentBert-main.
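
To make the correlation-matrix regularizer mentioned in the abstract concrete, here is a minimal, hypothetical PyTorch sketch of one way to push the feature correlation matrix toward the identity (i.e., toward isotropy). The function name, the epsilon value, and the weighting coefficient lambda_cor are illustrative assumptions and are not taken from the paper or its released code.

    import torch

    def correlation_regularizer(features: torch.Tensor) -> torch.Tensor:
        # features: (batch_size, hidden_dim) utterance embeddings from the encoder
        # Standardize each dimension so the Gram matrix becomes a correlation matrix.
        z = features - features.mean(dim=0, keepdim=True)
        z = z / (z.std(dim=0, keepdim=True) + 1e-6)
        n = z.size(0)
        corr = (z.T @ z) / (n - 1)                      # (hidden_dim, hidden_dim)
        identity = torch.eye(corr.size(0), device=corr.device)
        # Penalize deviation from an isotropic (identity) correlation structure.
        return ((corr - identity) ** 2).sum()

    # Example usage (hypothetical): add the regularizer to the supervised
    # intent-classification loss during pre-training.
    embeddings = torch.randn(32, 768)                    # e.g. [CLS] features from BERT
    reg = correlation_regularizer(embeddings)
    # total_loss = ce_loss + lambda_cor * reg            # lambda_cor is a hypothetical weight

The contrastive-learning regularizer described in the abstract would instead pull same-intent utterances together and push different-intent utterances apart in the feature space; its exact form is given in the paper and is not reproduced here.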

Authors (7)
  1. Haode Zhang (7 papers)
  2. Haowen Liang (9 papers)
  3. Yuwei Zhang (48 papers)
  4. Liming Zhan (7 papers)
  5. Xiao-Ming Wu (91 papers)
  6. Xiaolei Lu (13 papers)
  7. Albert Y. S. Lam (34 papers)
Citations (26)
