Meta-in-context learning in large language models (2305.12907v1)
Abstract: Large language models (LLMs) have shown tremendous performance across a variety of tasks. In-context learning -- the ability to improve at a task after being provided with a number of demonstrations -- is seen as one of the main contributors to their success. In the present paper, we demonstrate that the in-context learning abilities of LLMs can be recursively improved via in-context learning itself. We coin this phenomenon meta-in-context learning. Looking at two idealized domains, a one-dimensional regression task and a two-armed bandit task, we show that meta-in-context learning adaptively reshapes an LLM's priors over expected tasks. Furthermore, we find that meta-in-context learning modifies the in-context learning strategies of such models. Finally, we extend our approach to a benchmark of real-world regression problems, where we observe performance competitive with traditional learning algorithms. Taken together, our work improves our understanding of in-context learning and paves the way toward adapting LLMs to the environment in which they are applied purely through meta-in-context learning rather than traditional finetuning.
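To make the setup concrete, below is a minimal sketch of how a meta-in-context prompt for the one-dimensional regression domain might be assembled: several completed example tasks are placed ahead of the current task, whose final output is left for the model to predict. The prompt template, the task-generating function, and all parameters here are illustrative assumptions, not the paper's actual materials.

```python
import random

def make_regression_task(n_points=5, seed=None):
    """Generate one noisy linear 1D regression task: y = a*x + b.
    (Illustrative; the paper's task distribution may differ.)"""
    rng = random.Random(seed)
    a, b = rng.uniform(-2, 2), rng.uniform(-5, 5)
    xs = [round(rng.uniform(0, 10), 1) for _ in range(n_points)]
    ys = [round(a * x + b + rng.gauss(0, 0.5), 1) for x in xs]
    return xs, ys

def format_task(xs, ys, hold_out_last=True):
    """Format a task as (x, y) demonstrations; optionally leave the
    final y blank as the query the model must complete."""
    lines = [f"x = {x}, y = {y}" for x, y in zip(xs[:-1], ys[:-1])]
    query = f"x = {xs[-1]}, y =" if hold_out_last else f"x = {xs[-1]}, y = {ys[-1]}"
    return "\n".join(lines + [query])

# Meta-in-context prompt: fully completed tasks precede the current one,
# letting the model adapt its priors and strategy across tasks rather
# than only within a single task.
previous_tasks = [format_task(*make_regression_task(seed=s), hold_out_last=False)
                  for s in range(3)]
current_task = format_task(*make_regression_task(seed=99), hold_out_last=True)

prompt = "\n\n".join(
    [f"Task {i + 1}:\n{t}" for i, t in enumerate(previous_tasks)]
    + [f"Task {len(previous_tasks) + 1}:\n{current_task}"]
)
print(prompt)  # send this to an LLM and read off its predicted y
```

Under this framing, in-context learning happens within the final task, while meta-in-context learning arises from exposure to the preceding completed tasks in the same prompt.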
- Julian Coda-Forno
- Marcel Binz
- Zeynep Akata
- Matthew Botvinick
- Jane X. Wang
- Eric Schulz