
From Human Memory to AI Memory: A Survey on Memory Mechanisms in the Era of LLMs (2504.15965v2)

Published 22 Apr 2025 in cs.IR

Abstract: Memory is the process of encoding, storing, and retrieving information, allowing humans to retain experiences, knowledge, skills, and facts over time, and serving as the foundation for growth and effective interaction with the world. It plays a crucial role in shaping our identity, making decisions, learning from past experiences, building relationships, and adapting to changes. In the era of LLMs, memory refers to the ability of an AI system to retain, recall, and use information from past interactions to improve future responses and interactions. Although previous research and reviews have provided detailed descriptions of memory mechanisms, there is still a lack of a systematic review that summarizes and analyzes the relationship between the memory of LLM-driven AI systems and human memory, as well as how we can be inspired by human memory to construct more powerful memory systems. To achieve this, in this paper, we propose a comprehensive survey on the memory of LLM-driven AI systems. In particular, we first conduct a detailed analysis of the categories of human memory and relate them to the memory of AI systems. Second, we systematically organize existing memory-related work and propose a categorization method based on three dimensions (object, form, and time) and eight quadrants. Finally, we illustrate some open problems regarding the memory of current AI systems and outline possible future directions for memory in the era of LLMs.

Summary

Memory Mechanisms in AI Systems: An In-Depth Analysis

Memory mechanisms have increasingly become pivotal in AI systems driven by LLMs, offering pathways to enhanced personalization, adaptability, and cognitive functionality. "From Human Memory to AI Memory: A Survey on Memory Mechanisms in the Era of LLMs" provides a comprehensive exploration of these mechanisms, directly correlating the intricacies of human cognition with the architecture of memory systems in AI. The authors propose a novel framework for categorizing AI memory across multiple dimensions, drawing compelling parallels between human and AI memory.

Theoretical Foundations

The survey begins by exploring the neuroscientific understanding of human memory, classified into short-term and long-term categories. It proceeds to map these distinctions onto AI systems, illustrating how mechanisms like sensory memory, working memory, explicit memory, and implicit memory serve as functional analogues in AI. This provides foundational insights into how AI systems can emulate human-like memory processes for encoding, storage, and retrieval.

Memory Taxonomy in AI

A central contribution of the paper is the introduction of a three-dimensional, eight-quadrant (3D-8Q) taxonomy for AI memory. This classification system organizes memory based on object (personal and system), form (non-parametric and parametric), and time (short-term and long-term). Such a structured approach facilitates more systematic exploration of memory systems in AI, enhancing the design and implementation strategies for adaptive, learning-oriented models.
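Since each of the three dimensions is binary, the taxonomy yields exactly 2 × 2 × 2 = 8 quadrants. As a minimal sketch of how the classification composes (the enum and class names below are illustrative, not taken from the paper):

```python
from dataclasses import dataclass
from enum import Enum
from itertools import product

class MemoryObject(Enum):
    PERSONAL = "personal"
    SYSTEM = "system"

class MemoryForm(Enum):
    NON_PARAMETRIC = "non-parametric"
    PARAMETRIC = "parametric"

class MemoryTime(Enum):
    SHORT_TERM = "short-term"
    LONG_TERM = "long-term"

@dataclass(frozen=True)
class MemoryQuadrant:
    """One of the eight quadrants in the 3D-8Q taxonomy."""
    obj: MemoryObject
    form: MemoryForm
    time: MemoryTime

# Enumerate all eight quadrants (2 x 2 x 2 combinations).
ALL_QUADRANTS = [
    MemoryQuadrant(o, f, t)
    for o, f, t in product(MemoryObject, MemoryForm, MemoryTime)
]
print(len(ALL_QUADRANTS))  # 8
```

Any concrete memory mechanism surveyed in the paper can then be filed under exactly one quadrant, e.g. a user-profile retrieval store would fall under (personal, non-parametric, long-term).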

Personal and System Memory

The paper distinguishes between personal and system memory, addressing their unique roles and implementations:

  1. Personal Memory: Focuses on user-related interactions, leveraging both non-parametric short-term contextual memory and long-term retrieval-augmented memory. Personal memory systems aim to improve user experience through personalized adaptations in dialogue and recommendation engines.
  2. System Memory: Covers the intermediate procedural outputs generated during AI task execution. By emphasizing reasoning and self-reflection, system memory facilitates the dynamic evolution of AI systems, enabling them to handle more complex tasks through iterative learning and memory refinement.
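The personal-memory pattern above can be sketched as a short-term context window paired with a long-term store queried at generation time. The following toy implementation is an assumption-laden illustration, not the paper's method: it uses naive word-overlap scoring where real systems would use embedding similarity, and the class and method names are invented for this example.

```python
from collections import deque

class PersonalMemory:
    """Toy sketch of retrieval-augmented personal memory:
    a bounded short-term context window plus a persistent
    long-term store with naive keyword retrieval."""

    def __init__(self, short_term_size: int = 3):
        self.short_term = deque(maxlen=short_term_size)  # recent turns only
        self.long_term: list[str] = []                   # persistent record

    def observe(self, utterance: str) -> None:
        # Every utterance enters both stores; old turns age out
        # of the short-term window automatically.
        self.short_term.append(utterance)
        self.long_term.append(utterance)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Score long-term entries by word overlap with the query
        # (a stand-in for embedding-based similarity search).
        q = set(query.lower().split())
        scored = sorted(
            self.long_term,
            key=lambda m: len(q & set(m.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def build_prompt(self, query: str) -> str:
        # Augment the user query with retrieved long-term memories
        # and the short-term context before it reaches the LLM.
        memories = "\n".join(self.retrieve(query))
        context = "\n".join(self.short_term)
        return f"Memories:\n{memories}\n\nContext:\n{context}\n\nUser: {query}"
```

For example, after `observe("I prefer vegetarian recipes")`, a later query about meal suggestions would pull that preference back into the prompt even once it has left the short-term window.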

Challenges and Future Directions

Though LLM-driven AI systems have made substantial progress in memory integration, the paper recognizes several outstanding challenges:

  • Multimodal Memory: Transition from unimodal to multimodal systems to enhance perceptual capabilities across text, images, and audio.
  • Stream Memory: Move from static to continuous memory models to prioritize real-time adaptability.
  • Comprehensive Memory Systems: Pursue integrated memory architectures resembling human cognitive processes to enhance adaptability and responsiveness.
  • Shared Memory Paradigms: Enable inter-model collaboration and multi-domain knowledge sharing to improve task-solving capabilities.
  • Privacy Concerns: Address shifting privacy dynamics in large-scale data sharing, encompassing both individual and collective privacy perspectives.
  • Automated Evolution: Advance AI systems towards self-directed, rule-free evolution of cognitive capabilities.

Conclusion

This paper offers crucial insights into the relationship between human memory and AI systems. By comprehensively defining various types of memory, proposing a taxonomy for their exploration, and highlighting key research areas, it not only contributes to the ongoing dialogue in AI research but also sets a clear direction for future advancements. It is evident that as AI systems continue to evolve, fostering intelligent and adaptive memory architectures will be central to their development and application across diverse, real-world scenarios.
