Rethinking Memory Mechanisms of Foundation Agents in the Second Half: A Survey
This presentation explores how memory mechanisms transform AI agents from static benchmark performers into adaptive systems capable of operating in dynamic, real-world environments. We examine the three-dimensional framework organizing agent memory—substrate, cognitive mechanism, and subject—and reveal how structured memory operations enable agents to retain, organize, and exploit information across long-horizon tasks. The talk demonstrates why memory is foundational for the next generation of AI agents working in education, healthcare, scientific research, and beyond.

Script
When an AI agent completes one task brilliantly but forgets everything by the next conversation, we are witnessing the single biggest barrier between benchmark performance and real-world utility. Memory is not just a feature for AI agents; it is the foundation that separates static responders from adaptive problem solvers.
The authors identify a critical gap. As AI agents move from controlled benchmarks into messy reality, they encounter something language models alone cannot solve: the need to remember, prioritize, and evolve across interactions. A single conversation is no longer enough.
The researchers propose organizing memory along three fundamental dimensions.
This framework reveals how agents can structure what they remember. The substrate determines where information lives. The cognitive mechanism shapes how memories are organized, whether as specific episodes, general concepts, or learned procedures. The subject decides whose information matters, whether the agent is optimizing its own behavior or adapting to individual users.
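The three dimensions above can be sketched as a simple data model. This is a minimal illustration, not the survey's own formalism: the category names and example values are assumptions chosen to mirror the talk's description (where information lives, how it is organized, whose information it is).

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical labels for the three dimensions; illustrative only.
class Substrate(Enum):
    CONTEXT_WINDOW = "context_window"   # information kept in the model's context
    EXTERNAL_STORE = "external_store"   # information persisted outside the model

class Mechanism(Enum):
    EPISODIC = "episodic"       # specific episodes
    SEMANTIC = "semantic"       # general concepts
    PROCEDURAL = "procedural"   # learned procedures

class Subject(Enum):
    AGENT = "agent"   # optimizing the agent's own behavior
    USER = "user"     # adapting to an individual user

@dataclass
class MemoryRecord:
    content: str
    substrate: Substrate
    mechanism: Mechanism
    subject: Subject

# A user-preference memory: a general concept about an individual user,
# stored externally so it survives across conversations.
rec = MemoryRecord("User prefers concise answers",
                   Substrate.EXTERNAL_STORE, Mechanism.SEMANTIC, Subject.USER)
```

Tagging each record along all three axes is what lets an agent ask targeted questions later, such as "retrieve semantic memories about this user."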
Memory is not passive storage. The paper demonstrates how agents actively manage memory through operations that balance retention with efficiency. On the storage side, summarization and structured formats prevent information overload. On the retrieval side, agents learn to pull exactly what is relevant for each decision, and in multi-agent settings, they coordinate memory sharing to enable collaborative problem solving.
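The store-and-retrieve loop described above can be made concrete with a toy implementation. This is a deliberately naive sketch: real agents would use an LLM for summarization and embeddings for retrieval, whereas here truncation stands in for summarization and keyword overlap stands in for relevance scoring. All names are illustrative, not from the paper.

```python
from collections import deque

class SimpleMemory:
    """Minimal sketch of active memory management:
    bounded storage with summarization on write, relevance-ranked retrieval on read."""

    def __init__(self, capacity=5):
        # Bounded store: oldest entries are evicted, preventing overload.
        self.entries = deque(maxlen=capacity)

    def store(self, text, max_len=80):
        # Truncation stands in for an LLM-generated summary.
        self.entries.append(text[:max_len])

    def retrieve(self, query, k=2):
        # Rank entries by word overlap with the query, so the agent
        # pulls only what is relevant for the current decision.
        q = set(query.lower().split())
        scored = sorted(self.entries,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return scored[:k]

mem = SimpleMemory()
mem.store("The user asked about Python decorators yesterday")
mem.store("Weather in Paris was discussed")
top = mem.retrieve("python decorators", k=1)
```

In a multi-agent setting, several such stores could expose their `retrieve` methods to one another, which is the coordination-and-sharing pattern the talk describes.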
This diagram captures the evolution of memory learning policies. Prompting gives agents basic memory instructions, but remains imprecise. Fine-tuning narrows memory decisions using task-specific data. Reinforcement learning pushes further, enabling agents to discover optimal memory strategies through trial and reward. The progression shows how agents transition from rule-following to adaptive, self-improving memory management.
Memory transforms agents from tools that answer questions into systems that learn, adapt, and evolve alongside us. The shift from static benchmarks to dynamic memory is not incremental; it redefines what AI agents can become. Visit EmergentMind.com to explore this research further and create your own video presentations.