
A Mechanism for Sample-Efficient In-Context Learning for Sparse Retrieval Tasks (2305.17040v1)

Published 26 May 2023 in cs.LG and cs.CL

Abstract: We study the phenomenon of in-context learning (ICL) exhibited by LLMs, where they can adapt to a new learning task, given a handful of labeled examples, without any explicit parameter optimization. Our goal is to explain how a pre-trained transformer model is able to perform ICL under reasonable assumptions on the pre-training process and the downstream tasks. We posit a mechanism whereby a transformer can achieve the following: (a) receive an i.i.d. sequence of examples which have been converted into a prompt using potentially-ambiguous delimiters, (b) correctly segment the prompt into examples and labels, (c) infer from the data a sparse linear regressor hypothesis, and finally (d) apply this hypothesis on the given test example and return a predicted label. We establish that this entire procedure is implementable using the transformer mechanism, and we give sample complexity guarantees for this learning framework. Our empirical findings validate the challenge of segmentation, and we show a correspondence between our posited mechanisms and observed attention maps for step (c).
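
To make the four-step pipeline concrete, here is a minimal sketch that simulates it outside a transformer: i.i.d. examples are serialized into a delimited prompt, the prompt is segmented back into examples and labels, a sparse linear hypothesis is fit, and the hypothesis is applied to a test point. The delimiters ("|", ";", "->"), the Lasso solver standing in for the transformer's internal sparse-regression step, and all parameter values are illustrative assumptions, not the paper's actual construction.

```python
# Sketch of steps (a)-(d) from the abstract, with an explicit sparse solver
# (Lasso) standing in for the mechanism the paper implements inside a
# transformer. Delimiters and constants are hypothetical choices.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, n, k = 20, 15, 3                      # dimension, # examples, sparsity

# Ground-truth k-sparse regressor
w_star = np.zeros(d)
w_star[rng.choice(d, size=k, replace=False)] = rng.normal(size=k)

# (a) draw i.i.d. examples and serialize them into one delimited prompt
X = rng.normal(size=(n, d))
y = X @ w_star
prompt = ";".join(
    "|".join(f"{v:.4f}" for v in x) + "->" + f"{label:.4f}"
    for x, label in zip(X, y)
)

# (b) segment the prompt back into examples and labels
pairs = [chunk.split("->") for chunk in prompt.split(";")]
X_rec = np.array([[float(v) for v in feats.split("|")] for feats, _ in pairs])
y_rec = np.array([float(label) for _, label in pairs])

# (c) infer a sparse linear hypothesis from the recovered data
model = Lasso(alpha=0.01, fit_intercept=False).fit(X_rec, y_rec)

# (d) apply the hypothesis to a fresh test example
x_test = rng.normal(size=d)
print("predicted:", model.predict(x_test[None, :])[0])
print("true:     ", x_test @ w_star)
```

In this toy version segmentation is trivial because the delimiters are unambiguous; the paper's point is that with ambiguous delimiters step (b) itself becomes a nontrivial inference problem for the model.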

Authors (4)
  1. Jacob Abernethy (46 papers)
  2. Alekh Agarwal (99 papers)
  3. Teodor V. Marinov (14 papers)
  4. Manfred K. Warmuth (39 papers)
Citations (15)