Meta-in-context learning in large language models (2305.12907v1)

Published 22 May 2023 in cs.CL, cs.AI, and cs.LG

Abstract: LLMs have shown tremendous performance in a variety of tasks. In-context learning -- the ability to improve at a task after being provided with a number of demonstrations -- is seen as one of the main contributors to their success. In the present paper, we demonstrate that the in-context learning abilities of LLMs can be recursively improved via in-context learning itself. We coin this phenomenon meta-in-context learning. Looking at two idealized domains, a one-dimensional regression task and a two-armed bandit task, we show that meta-in-context learning adaptively reshapes an LLM's priors over expected tasks. Furthermore, we find that meta-in-context learning modifies the in-context learning strategies of such models. Finally, we extend our approach to a benchmark of real-world regression problems, where we observe performance competitive with traditional learning algorithms. Taken together, our work improves our understanding of in-context learning and paves the way toward adapting LLMs to the environment they are applied in purely through meta-in-context learning rather than traditional finetuning.
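For concreteness, the sketch below illustrates how a meta-in-context prompt for the one-dimensional regression setting might be assembled: several complete tasks drawn from the same task distribution are concatenated in the prompt before the current, partially observed task, so that the model can adapt its prior over tasks purely in context. The prompt format, task parameters, and helper functions here are illustrative assumptions, not the authors' exact setup.

```python
import random

def make_regression_task(slope, intercept, n_points=5, noise=0.1):
    """Sample (x, y) pairs from a noisy linear function y = slope * x + intercept."""
    xs = [round(random.uniform(0, 10), 1) for _ in range(n_points)]
    return [(x, round(slope * x + intercept + random.gauss(0, noise), 2)) for x in xs]

def format_task(pairs, query_x=None):
    """Render one task's demonstrations as text; optionally leave a final y open as the query."""
    lines = [f"x = {x}, y = {y}" for x, y in pairs]
    if query_x is not None:
        lines.append(f"x = {query_x}, y =")
    return "\n".join(lines)

# Meta-in-context learning: prepend several fully observed tasks from the same
# distribution before the current task's demonstrations and query.
random.seed(0)
previous_tasks = [make_regression_task(slope=2.0, intercept=b) for b in (1.0, 1.5, 0.5)]
current_task = make_regression_task(slope=2.0, intercept=1.2, n_points=3)

prompt_parts = ["You will see several regression problems, one after another."]
for i, task in enumerate(previous_tasks, start=1):
    prompt_parts.append(f"Task {i}:\n{format_task(task)}")
prompt_parts.append(f"Task {len(previous_tasks) + 1}:\n{format_task(current_task, query_x=7.3)}")

prompt = "\n\n".join(prompt_parts)
print(prompt)  # this prompt would be sent to an LLM to obtain its prediction for the final y
```

In this construction, ordinary in-context learning corresponds to showing only the final task; meta-in-context learning adds the earlier tasks so that the model's predictions on the final task reflect what it has inferred about the task distribution.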

Authors (6)
  1. Julian Coda-Forno (6 papers)
  2. Marcel Binz (30 papers)
  3. Zeynep Akata (144 papers)
  4. Matthew Botvinick (30 papers)
  5. Jane X. Wang (21 papers)
  6. Eric Schulz (33 papers)
Citations (29)
