
Comparable Demonstrations are Important in In-Context Learning: A Novel Perspective on Demonstration Selection (2312.07476v2)

Published 12 Dec 2023 in cs.CL and cs.AI

Abstract: In-Context Learning (ICL) is an important paradigm for adapting LLMs to downstream tasks through a few demonstrations. Despite the great success of ICL, the limited number of demonstrations may lead to demonstration bias, i.e., the input-label mapping induced by the LLM misrepresents the task's essence. Inspired by human experience, we attempt to mitigate such bias through the perspective of the inter-demonstration relationship. Specifically, we construct Comparable Demonstrations (CDs) by minimally editing the texts to flip the corresponding labels, in order to highlight the task's essence and eliminate potential spurious correlations through inter-demonstration comparison. Through a series of experiments on CDs, we find that (1) demonstration bias does exist in LLMs, and CDs can significantly reduce such bias; and (2) CDs exhibit good ICL performance, especially in out-of-distribution scenarios. In summary, this study explores ICL mechanisms from a novel perspective, providing deeper insight into demonstration selection strategies for ICL.
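The core idea of the abstract can be sketched as code: a Comparable Demonstration is a pair of minimally edited texts whose labels flip, and placing each pair adjacently in the prompt lets the model contrast them and isolate the task's essence. The sketch below is illustrative only; the texts, labels, and function name are assumptions, not the paper's actual implementation or data.

```python
def build_cd_prompt(cd_pairs, query):
    """Assemble an ICL prompt from Comparable Demonstration pairs.

    cd_pairs: list of ((text, label), (edited_text, flipped_label)) tuples,
    where the second text is a minimal edit of the first that flips the label.
    The pair is kept adjacent so the minimal edit driving the label change
    stands out, discouraging spurious input-label correlations.
    """
    blocks = []
    for (text_a, label_a), (text_b, label_b) in cd_pairs:
        blocks.append(f"Review: {text_a}\nSentiment: {label_a}")
        blocks.append(f"Review: {text_b}\nSentiment: {label_b}")
    # Leave the final label slot empty for the model to complete.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

# Hypothetical sentiment-classification example.
pairs = [
    (("The plot was engaging and the acting superb.", "positive"),
     ("The plot was dull and the acting wooden.", "negative")),
]
prompt = build_cd_prompt(pairs, "A heartfelt film with a clumsy ending.")
```

In this sketch the only change between the paired reviews is the evaluative wording, so the comparison signals that sentiment, not topic or length, determines the label.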

Authors (5)
  1. Caoyun Fan (8 papers)
  2. Jidong Tian (13 papers)
  3. Yitian Li (9 papers)
  4. Hao He (99 papers)
  5. Yaohui Jin (40 papers)
Citations (3)