
Multiple Memory Systems for Enhancing the Long-term Memory of Agent (2508.15294v2)

Published 21 Aug 2025 in cs.AI, cs.CL, and cs.MA

Abstract: Agents powered by LLMs have achieved impressive results, but effectively handling the vast amounts of historical data generated during interactions remains a challenge. The current approach is to design a memory module for the agent to process these data. However, existing methods, such as MemoryBank and A-MEM, store memory content of poor quality, which degrades recall performance and response quality. To construct higher-quality long-term memory content, we designed a multiple memory system (MMS) inspired by cognitive psychology theory. The system processes short-term memory into multiple long-term memory fragments, and constructs retrieval memory units and contextual memory units based on these fragments, with a one-to-one correspondence between the two. During the retrieval phase, MMS matches the most relevant retrieval memory units to the user's query; the corresponding contextual memory units are then used as context in the response stage to enhance knowledge, thereby effectively utilizing historical data. Experiments on the LoCoMo dataset compared our method with three others, demonstrating its effectiveness. Ablation studies confirmed the rationality of our memory units. We also analyzed robustness with respect to the number of selected memory segments, as well as storage overhead, demonstrating the method's practical value.

Summary

  • The paper introduces a multiple memory system (MMS) that processes short-term memory into keyword, cognitive, episodic, and semantic fragments, significantly improving long-term recall.
  • It leverages cognitive psychology theories to design both retrieval and contextual memory units, outperforming traditional models like MemoryBank and A-MEM in complex reasoning tasks.
  • Experimental evaluations on the LoCoMo dataset show that MMS enhances performance metrics such as Recall@N, F1 Score, and BLEU-1, underscoring its robustness against noise.

Multiple Memory Systems for Enhancing the Long-term Memory of Agent

Introduction

This paper introduces a novel approach to enhancing long-term memory (LTM) in LLM-based agents by designing a Multiple Memory System (MMS). This system is inspired by cognitive psychology theories, which propose that memory is not a monolithic structure but consists of diverse subsystems each responsible for different types of information processing. Unlike traditional models like MemoryBank and A-MEM, which exhibit deficiencies in stored memory quality, this approach seeks to improve memory recall and response quality by processing short-term memory (STM) into various long-term memory fragments, including episodic and semantic memories. This multi-memory fragment strategy not only enhances recall but also improves agent responses by effectively utilizing historical data through retrieval and contextual memory units. Figure 1

Figure 1: Schematic of multi-memory system process: After acquiring short-term memory, MMS processes it into memory fragments and constructs retrieval units and contextual memory units.

Theoretical Framework

Cognitive Psychology and Memory Systems

Drawing on Tulving's theory of multiple memory systems, the MMS treats human memory as comprising procedural, semantic, and episodic components, each activated under different cognitive tasks. The Levels of Processing Theory suggests memory formation depends on the depth of information processing, while the Encoding Specificity Principle indicates effective retrieval is contextually dependent. The MMS translates these theories into practice by generating high-quality LTM fragments—keywords, cognitive perspectives, episodic, and semantic memory—that improve retrieval efficacy and knowledge enhancement during generation tasks.

Existing architectures like MemoryBank store dialogue content by extracting keywords and summaries as memory units, whereas A-MEM builds a knowledge network via keyword extraction and summarization. However, these models often miss nuanced context during retrieval, which hurts recall quality. MMS addresses this through its multi-faceted approach, ensuring retrieval units align more closely with queries, thereby expanding semantic reach and improving recall performance.

The MMS Framework

Construction of Long-term Memory Units

MMS processes the content of a dialogue (STM) into diverse long-term memory representations: $M_{\text{key}}$, $M_{\text{cog}}$, $M_{\text{epi}}$, and $M_{\text{sem}}$. These fragments are used to construct retrieval memory units ($MU_{\text{ret}}$), essential for high-precision relevance matching, and contextual memory units ($MU_{\text{cont}}$), which enrich knowledge output during generation. The architecture exploits both low-level keyword extraction and high-level semantic analysis to form robust query-alignment mechanisms. Figure 2

Figure 2: Impact on performance of adding other segments; for recall metrics, MMS and MMS+Sem are compared.
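The construction step can be sketched minimally as follows. The fragment extractors here are trivial string placeholders (in the paper, each of $M_{\text{key}}$, $M_{\text{cog}}$, $M_{\text{epi}}$, and $M_{\text{sem}}$ would be produced by LLM prompting, whose exact prompts are not reproduced here); the structural point is the one-to-one pairing between retrieval and contextual units:

```python
from dataclasses import dataclass

@dataclass
class MemoryUnit:
    ret: str   # retrieval memory unit, matched against queries
    cont: str  # paired contextual memory unit, fed to the LLM at response time

def build_memory_units(stm: str) -> list[MemoryUnit]:
    """Process one short-term memory (a dialogue) into paired memory units.

    The four fragment extractors below are placeholders; the paper derives
    each fragment type with an LLM rather than string slicing.
    """
    fragments = [
        " ".join(sorted(set(stm.lower().split()))[:5]),  # stand-in for M_key
        f"Perspective on: {stm[:40]}",                    # stand-in for M_cog
        f"Event record: {stm[:40]}",                      # stand-in for M_epi
        f"Fact: {stm[:40]}",                              # stand-in for M_sem
    ]
    # One-to-one correspondence: every retrieval unit keeps a link back to
    # richer context (here simply the raw dialogue) used during generation.
    return [MemoryUnit(ret=frag, cont=stm) for frag in fragments]
```

Storing the fragments as retrieval keys while keeping the fuller dialogue as the paired context is what lets matching stay precise without impoverishing the material handed to the LLM.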

Memory Retrieval and Utilization

The retrieval phase involves converting user queries into vectors and selecting the top-k memory segments based on cosine similarity for enhanced response generation, mapping each matched $MU_{\text{ret}}$ to its $MU_{\text{cont}}$. This meticulously structured memory integration ensures that contextually relevant content is fed back into LLMs, significantly enhancing agent responses under diverse querying scenarios.
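A minimal sketch of this phase, assuming the retrieval units have already been embedded by some sentence encoder (the encoder, variable names, and function signature below are illustrative, not taken from the paper):

```python
import numpy as np

def retrieve_context(query_vec, ret_vecs, cont_units, k=3):
    """Return the contextual units paired with the top-k retrieval units.

    query_vec : (d,) embedding of the user's query
    ret_vecs  : (n, d) embeddings of the retrieval memory units MU_ret
    cont_units: length-n list of contextual memory units MU_cont,
                positionally paired one-to-one with ret_vecs
    """
    q = query_vec / np.linalg.norm(query_vec)
    R = ret_vecs / np.linalg.norm(ret_vecs, axis=1, keepdims=True)
    sims = R @ q                    # cosine similarity per retrieval unit
    top = np.argsort(-sims)[:k]    # indices of the k most similar units
    return [cont_units[i] for i in top]
```

Because the units are paired positionally, the index found by similarity search over $MU_{\text{ret}}$ directly selects the $MU_{\text{cont}}$ that is injected into the response prompt.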

Experimental Evaluation

Setup and Results

Evaluations on the LoCoMo dataset used metrics such as Recall@N, F1 Score, and BLEU-1, across tasks that include single-hop, multi-hop, and temporal reasoning. MMS notably outperformed the baselines (NaiveRAG, MemoryBank, A-MEM) on multi-hop and open-domain questions, due to its adept multi-level integration of memory fragments. The system displayed robust performance, especially in high-complexity reasoning tasks.
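The summary does not define Recall@N precisely; a common reading (an assumption here, not quoted from the paper) is the fraction of gold evidence items that appear among the top-N retrieved units:

```python
def recall_at_n(retrieved_ids, gold_ids, n):
    """Fraction of gold evidence ids found among the top-n retrieved ids."""
    top_n = set(retrieved_ids[:n])
    return sum(1 for g in gold_ids if g in top_n) / len(gold_ids)
```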

Discussion

Ablation and Robustness Analysis

Ablation studies demonstrate the critical role of diverse memory segments in query handling efficiency, reaffirming the necessity of varied cognitive perspectives in LTM construction. Furthermore, robustness testing against memory fragment variation confirmed MMS's capacity for maintaining high-quality content retrieval despite potential noise.

Implications for AI Memory Systems

By integrating cognitive psychology insights with advanced LTM structuring, MMS paves the way for future exploration into memory system design in AI contexts. Such a multi-dimensional approach potentially sets a foundation for more human-like adaptive learning mechanisms within AI agents, continually improving interaction quality and autonomy.

Conclusion

The MMS approach demonstrates strong utility in enhancing recall and generation capabilities of AI agents, showcasing that cognitive-theory-based memory systems can significantly improve LTM quality. Through innovative memory fragmentation and retrieval strategies, MMS adapts sophisticated cognitive processes to practical computational models, offering a scalable, high-performance solution to memory challenges in LLM-based agents. Future work will explore larger-scale integration of memory operations to further align AI capabilities with human cognition models.
