
Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning (2305.14160v4)

Published 23 May 2023 in cs.CL and cs.LG

Abstract: In-context learning (ICL) emerges as a promising capability of LLMs by providing them with demonstration examples to perform diverse tasks. However, the underlying mechanism of how LLMs learn from the provided context remains under-explored. In this paper, we investigate the working mechanism of ICL through an information flow lens. Our findings reveal that label words in the demonstration examples function as anchors: (1) semantic information aggregates into label word representations during the shallow computation layers' processing; (2) the consolidated information in label words serves as a reference for LLMs' final predictions. Based on these insights, we introduce an anchor re-weighting method to improve ICL performance, a demonstration compression technique to expedite inference, and an analysis framework for diagnosing ICL errors in GPT2-XL. The promising applications of our findings again validate the uncovered ICL working mechanism and pave the way for future studies.
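The abstract's core claim is that attention can be read as an information flow: demonstration text aggregates into label-word positions in shallow layers, and the final position then draws on those label words when predicting. A toy sketch of this kind of measurement (not the paper's exact saliency metric; the sequence length, label positions, and synthetic attention matrix below are all hypothetical) might look like:

```python
import numpy as np

# Toy sketch of attention-flow measurement, using a synthetic causal
# attention matrix in place of a real model's. attn[q, k] is the
# attention that query position q pays to key position k.

rng = np.random.default_rng(0)
seq_len = 12
label_positions = [3, 7]       # assumed label-word indices in the prompt
final_position = seq_len - 1   # the position producing the prediction

# Random lower-triangular (causal) attention, rows normalized to 1.
attn = np.tril(rng.random((seq_len, seq_len)))
attn /= attn.sum(axis=1, keepdims=True)

# (1) text -> label anchors: attention the label-word positions
# (as queries) pay to the demonstration tokens preceding them,
# i.e. how much information aggregates into the anchors.
text_to_label = sum(
    attn[p, k] for p in label_positions for k in range(p)
)

# (2) label anchors -> prediction: attention the final position
# pays to the label words when forming its prediction.
label_to_final = attn[final_position, label_positions].sum()

print(f"text->label flow:  {text_to_label:.3f}")
print(f"label->final flow: {label_to_final:.3f}")
```

Comparing such flow scores across layers (high text-to-label flow in shallow layers, high label-to-final flow in deep layers) is the kind of evidence the abstract summarizes; the real analysis uses saliency scores over a trained model rather than raw synthetic attention.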

Authors (8)
  1. Lean Wang (10 papers)
  2. Lei Li (1293 papers)
  3. Damai Dai (38 papers)
  4. Deli Chen (20 papers)
  5. Hao Zhou (351 papers)
  6. Fandong Meng (174 papers)
  7. Jie Zhou (687 papers)
  8. Xu Sun (194 papers)
Citations (133)