Enabling LLMs to Improve Through Experience

Determine effective approaches for enabling large language models to improve their performance through accumulated experience from prior tasks and interactions.

Background

The paper highlights that while LLMs demonstrate strong reasoning and code-generation capabilities, a key open challenge is enabling these models to improve from prior experience rather than relying solely on single-pass prompting or costly parameter updates.

The authors discuss reinforcement learning as one mechanism for agent improvement but note its computational cost and data demands. They motivate lightweight, training-free strategies such as Reflexion and their proposed Multi-Agent Reflexion (MAR) as steps toward addressing the broader open problem of experience-driven improvement in LLMs.
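
To make the Reflexion-style idea concrete, the sketch below shows a retry loop in which verbal self-critiques, rather than weight updates, carry experience between attempts. This is a minimal illustration under stated assumptions, not the paper's implementation: the `llm` and `evaluate` callables are hypothetical placeholders for a text-in/text-out model call and a task-specific success check, and the prompts are illustrative only.

```python
from typing import Callable, List

def reflexion_loop(
    task: str,
    llm: Callable[[str], str],        # assumed: any text-in/text-out model call
    evaluate: Callable[[str], bool],  # assumed: task-specific success check
    max_trials: int = 3,
) -> str:
    """Training-free self-improvement: retry a task, feeding back verbal
    reflections on earlier failures instead of updating model weights."""
    reflections: List[str] = []       # accumulated "experience" across trials
    answer = ""
    for _ in range(max_trials):
        memory = "\n".join(f"- {r}" for r in reflections)
        prompt = (
            f"Task:\n{task}\n\n"
            + (f"Lessons from previous attempts:\n{memory}\n\n" if memory else "")
            + "Provide your best answer."
        )
        answer = llm(prompt)
        if evaluate(answer):          # stop as soon as an attempt succeeds
            return answer
        # Ask the model to critique its own failed attempt; store the critique
        # as lightweight, reusable experience for the next trial.
        reflections.append(
            llm(
                f"The following answer to the task failed:\n{answer}\n"
                "In one or two sentences, explain what went wrong and how to fix it."
            )
        )
    return answer                     # best effort after exhausting trials
```

A multi-agent variant such as MAR would, roughly, replace the single self-critique call with critiques contributed by multiple agents; that extension is not shown here.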

References

LLMs have evolved to generate strong reasoning traces and high-quality code, but enabling them to improve through experience remains an open problem.

MAR: Multi-Agent Reflexion Improves Reasoning Abilities in LLMs (Ozer et al., 23 Dec 2025, arXiv:2512.20845), Introduction, first paragraph.