Mechanisms underlying in-context learning in large language models

Characterize the mechanisms that enable large language models to perform in-context learning during inference without any additional weight updates when examples are provided in the prompt.

Background

The paper studies how transformer-based models might achieve in-context learning (ICL) by introducing contextual blocks, which generalize transformer blocks. It proves that a contextual layer can induce an implicit low-rank update to the weight matrix of the first MLP layer, determined by the prompt, thereby proposing a concrete mechanism for implicit adaptation at inference time.
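To make this low-rank mechanism concrete, the following NumPy sketch checks the rank-one identity behind it: if a_query denotes the attention output for the query token processed alone and a_full the attention output for the same token inside the full prompt, then applying the original first MLP weight W to a_full coincides with applying a rank-one-updated weight W + ΔW to a_query. The variable names and random setup are illustrative assumptions for this sketch, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden = 16, 32

# Illustrative stand-ins (assumed for this sketch):
# a_query : attention output for the query token processed alone
# a_full  : attention output for the same token inside the full prompt (context + query)
a_query = rng.normal(size=d_model)
a_full = rng.normal(size=d_model)
W = rng.normal(size=(d_hidden, d_model))  # first MLP weight matrix

# Contribution of the context to the attention output.
delta_a = a_full - a_query

# Rank-one implicit update to W induced by the context.
delta_W = np.outer(W @ delta_a, a_query) / np.dot(a_query, a_query)

# The updated weights acting on the query-only activation reproduce
# the original weights acting on the in-context activation.
lhs = (W + delta_W) @ a_query
rhs = W @ a_full
assert np.allclose(lhs, rhs)
print("max abs difference:", np.abs(lhs - rhs).max())
print("rank of delta_W:", np.linalg.matrix_rank(delta_W))
```

The identity holds because (W + ΔW) a_query = W a_query + W Δa = W a_full, so the context's effect on this layer can be folded into a rank-one weight change without any explicit training step.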

Despite presenting this mechanism and empirical verification in a controlled setting, the authors emphasize that the broader underlying mechanisms enabling ICL in LLMs are not fully understood, which motivates the explicit open problem stated in the abstract.

References

One of the most striking features of large language models (LLMs) is their ability to learn in context. Namely, at inference time, an LLM is able to learn new patterns without any additional weight update when these patterns are presented in the form of examples in the prompt, even if these patterns were not seen during training. The mechanisms through which this can happen are still largely unknown.

Learning without training: The implicit dynamics of in-context learning (2507.16003 - Dherin et al., 21 Jul 2025) in Abstract