Long-term Memory for LLM-based Agents

Develop long-term memory mechanisms for large language model-based agents that enable reliable retention and retrieval of information over extended interactions, and that support continual learning within agentic systems.

Background

In its discussion of LLMs as core components of advanced AI agents, the paper highlights current limitations in planning robustness, tool selection, and memory. In particular, the authors note that long-term memory capabilities remain inadequately solved.

The text further emphasizes that existing paradigms do not readily support continual learning, underscoring the need for research into memory architectures and learning procedures that can operate over extended horizons and evolving tasks in agentic environments.
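To make the problem concrete, the retention-and-retrieval half of the task can be pictured as an append-only store that is queried by similarity at each interaction step. The sketch below is purely illustrative (all names are hypothetical, and a toy bag-of-words similarity stands in for a learned embedding model); it captures retrieval over an extended interaction history but deliberately omits the unsolved parts, such as consolidation, forgetting, and continual learning.

```python
import math
import re
from collections import Counter

def _embed(text):
    # Toy bag-of-words "embedding" (word-count vector).
    # A real agent memory would use a learned encoder here.
    return Counter(re.findall(r"\w+", text.lower()))

def _cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

class LongTermMemory:
    """Minimal append-only memory with similarity-based retrieval."""

    def __init__(self):
        self.entries = []  # list of (original text, embedding)

    def write(self, text):
        # Retention: store every observation with its embedding.
        self.entries.append((text, _embed(text)))

    def retrieve(self, query, k=2):
        # Retrieval: rank stored entries by similarity to the query.
        q = _embed(query)
        ranked = sorted(self.entries,
                        key=lambda e: _cosine(q, e[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

memory = LongTermMemory()
memory.write("user prefers concise answers")
memory.write("project deadline is next Friday")
memory.write("the user likes the Rust language")
print(memory.retrieve("what programming language does the user like", k=1))
# → ["the user likes the Rust language"]
```

The open research questions noted above begin exactly where this sketch stops: deciding what to write, how to compress or consolidate old entries, and how retrieved memories should update the agent's behavior over evolving tasks.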

References

Additionally, long-term memory represents an open research problem, and the current paradigm does not readily support continual learning.

Intelligent AI Delegation (2602.11865 - Tomašev et al., 12 Feb 2026), Section “Previous Work on Delegation” (LLMs paragraph)