Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning (2109.04144v1)

Published 9 Sep 2021 in cs.CL and cs.AI

Abstract: Recent prompt-based approaches allow pretrained language models to achieve strong performance on few-shot finetuning by reformulating downstream tasks as a language modeling problem. In this work, we demonstrate that, despite its advantages in low data regimes, finetuned prompt-based models for sentence pair classification tasks still suffer from a common pitfall of adopting inference heuristics based on lexical overlap, e.g., models incorrectly assuming a sentence pair has the same meaning because the two sentences consist of the same set of words. Interestingly, we find that this particular inference heuristic is significantly less present in the zero-shot evaluation of the prompt-based model, indicating how finetuning can be destructive to useful knowledge learned during pretraining. We then show that adding a regularization that preserves the pretrained weights is effective in mitigating this destructive tendency of few-shot finetuning. Our evaluation on three datasets demonstrates promising improvements on the three corresponding challenge datasets used to diagnose these inference heuristics.
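
The abstract describes the mitigation only as "a regularization that preserves the pretrained weights" without specifying its form. Below is a minimal PyTorch sketch of one plausible instantiation, an L2 penalty pulling finetuned parameters back toward their pretrained values (L2-SP-style); this is an assumption for illustration, not necessarily the paper's exact method, and the function name, `strength` hyperparameter, and loop variables are hypothetical.

```python
import torch

def l2_to_pretrained(model: torch.nn.Module,
                     pretrained: dict,
                     strength: float = 0.01) -> torch.Tensor:
    """L2 penalty: strength * sum ||theta - theta_pretrained||^2 over parameters.

    `pretrained` maps parameter names to detached copies of the weights
    captured before finetuning begins.
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        if param.requires_grad and name in pretrained:
            penalty = penalty + (param - pretrained[name]).pow(2).sum()
    return strength * penalty

# Usage sketch inside a few-shot finetuning loop (hypothetical names):
#   pretrained = {n: p.detach().clone() for n, p in model.named_parameters()}
#   ...
#   loss = task_loss + l2_to_pretrained(model, pretrained, strength=0.01)
#   loss.backward()
#   optimizer.step()
```

The design intuition follows the abstract's finding: since the lexical-overlap heuristic is largely absent in the zero-shot model, keeping the finetuned weights close to their pretrained values should help retain that useful pretraining knowledge.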

Authors (4)
  1. Prasetya Ajie Utama (6 papers)
  2. Nafise Sadat Moosavi (38 papers)
  3. Victor Sanh (21 papers)
  4. Iryna Gurevych (264 papers)
Citations (33)