
LongMemEval: Benchmark for LLM Memory

Updated 22 October 2025
  • LongMemEval is a benchmark that evaluates long-term interactive memory in LLM chat assistants through realistic, multi-session dialogues.
  • It measures five key memory abilities—information extraction, multi-session reasoning, temporal reasoning, knowledge updates, and abstention.
  • The framework integrates modular indexing, retrieval, and reading optimizations to enhance memory recall and answer synthesis in extended interactions.

LongMemEval is a large-scale benchmark designed for systematic evaluation of long-term, interactive memory in LLM-driven chat assistants. By embedding memory-intensive queries within freely scalable, task-oriented multi-session chat histories, LongMemEval enables rigorous assessment of five key long-term memory abilities: information extraction, multi-session reasoning, temporal reasoning, knowledge updates, and abstention. It establishes a challenging and realistic data regime for longitudinal memory benchmarking and provides a unified experimental framework, including optimizations for indexing, retrieval, and reading. The dataset, experimental protocols, and code are publicly available to facilitate advances in memory-augmented LLM systems.

1. Dataset Design and Structure

LongMemEval is constructed to emulate realistic prolonged user–assistant interaction scenarios. Each test sample comprises:

  • A lengthy, ordered sequence of session tuples $(t_1, S_1), (t_2, S_2), \dots, (t_N, S_N)$, where each $S_i$ is a multi-turn, task-oriented dialogue and $t_i$ is its associated timestamp.
  • One corresponding question paired with a unique answer that requires evidence from the provided history.
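
For concreteness, a single test instance might be represented roughly as follows; this is an illustrative sketch, and the field names are assumptions for exposition rather than the repository's actual schema:

```python
# Hypothetical layout of one LongMemEval test instance.
# Field names are illustrative only; see the official repository for the real schema.
instance = {
    "sessions": [
        {
            "timestamp": "2023-05-14 09:12",    # t_i
            "turns": [                          # S_i: a multi-turn, task-oriented dialogue
                {"role": "user", "content": "..."},
                {"role": "assistant", "content": "..."},
            ],
        },
        # ... many more (t_i, S_i) sessions, mostly distractor content ...
    ],
    "question": "...",        # memory-intensive query over the full history
    "answer": "...",          # unique gold answer grounded in the sessions
    "question_type": "multi-session-reasoning",  # one of the five ability categories
}
```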

Key dataset properties include:

  • Scale: The standard LongMemEval_S configuration contains histories of approximately 115,000 tokens per instance, while LongMemEval_M features up to 500 sessions and 1.5 million tokens per instance.
  • Data Generation: Sessions are generated using a “self-chat plus human editing” pipeline. Questions are designed via an attribute-controlled curation process ensuring diverse memory requirements.
  • Needle-in-a-Haystack Design: Questions embed their relevant evidence among large amounts of distractor content, simulating the real difficulty of retrieving memories from long histories.
  • Access: Data and code are publicly released at https://github.com/xiaowu0162/LongMemEval.

2. Memory Abilities Evaluated

LongMemEval is organized explicitly to benchmark five core abilities required for robust long-term interactive memory in LLM-based chat assistants:

  • Information Extraction (IE): Recall of explicit, specific details mentioned during user–assistant interactions.
  • Multi-Session Reasoning (MR): Aggregation and synthesis of evidence distributed across several distinct sessions.
  • Temporal Reasoning (TR): Interpretation and manipulation of temporal information, including timestamps and inferred times.
  • Knowledge Updates (KU): Tracking and revising user or assistant knowledge as it changes over time.
  • Abstention (ABS): Controlled refusal to answer when the necessary evidence is absent or the question is ill-posed.

These categories reflect not only low-level memory (fact recall) but also higher-level cognitive phenomena necessary for personalized and temporally coherent AI interaction.
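
To make these categories concrete, the following invented questions illustrate the flavor of each ability; they are written for exposition and are not drawn from the benchmark:

```python
# Invented example questions for each ability category (not from the dataset).
example_questions = {
    "IE":  "What brand of running shoes did I say I bought last month?",
    "MR":  "Across all of our conversations, which three cities have I mentioned visiting?",
    "TR":  "How many weeks passed between my dentist appointment and my move?",
    "KU":  "I mentioned switching jobs a while ago; where do I work now?",
    # For ABS, the correct behavior is to refuse, because the fact was never stated.
    "ABS": "What did I say my sister's favorite movie is?",
}
```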

3. Challenges, Baseline Performance, and Degradation

LongMemEval reveals that state-of-the-art commercial and open-source chat models—when tasked with multi-session memory recall in contexts scaled beyond prior benchmarks—suffer pronounced performance drops:

  • Accuracy decreases by 30–60% relative to an oracle retrieval setting (i.e., when the model is presented with only the evidence sessions).
  • Notable failure cases include loss of detail in the “middle” of a long chat sequence (“lost-in-the-middle”), failure to aggregate temporally distributed information, and over-reliance on shallow pattern matching.
  • Empirical evaluation of models such as ChatGPT and Coze demonstrates that scaling interaction length consistently exacerbates these challenges.
  • Even when advanced dense retrievers are integrated, correct synthesis of evidence remains a limiting factor on end-to-end performance.

These results document the depth of the challenge posed by realistic longitudinal memory requirements and question the sufficiency of naïve long-context capabilities for real interactive agents.

4. Unified Memory Framework and Optimizations

LongMemEval formalizes a modular architecture for LLM-based chat assistant memory, partitioned into three stages:

  1. Indexing: Each session (or its sub-components) is converted into one or more key–value memory entries.
  2. Retrieval: Given a query, a retrieval mechanism fetches the top-$k$ candidate memory items from the memory store.
  3. Reading: The LLM generates a final answer by reasoning over the content of retrieved memory entries.
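
A minimal sketch of this indexing, retrieval, and reading loop is shown below; the interfaces are assumed for illustration (the embedding function, generation function, and prompt wording are placeholders, not the benchmark's reference implementation):

```python
from typing import Callable

def dot(a: list[float], b: list[float]) -> float:
    """Simple inner-product similarity between two embedding vectors."""
    return sum(x * y for x, y in zip(a, b))

def build_index(sessions, embed: Callable[[str], list[float]]):
    """Indexing: convert each session into a key-value memory entry."""
    memory = []
    for timestamp, turns in sessions:
        value = "\n".join(f"{t['role']}: {t['content']}" for t in turns)
        memory.append({"key": embed(value), "value": value, "timestamp": timestamp})
    return memory

def retrieve(query: str, memory, embed, k: int = 5):
    """Retrieval: fetch the top-k memory items most similar to the query."""
    q = embed(query)
    return sorted(memory, key=lambda m: -dot(q, m["key"]))[:k]

def read(question: str, retrieved, generate: Callable[[str], str]) -> str:
    """Reading: the LLM reasons over the retrieved entries to produce the final answer."""
    context = "\n\n".join(m["value"] for m in retrieved)
    prompt = f"Conversation history excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)
```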

Within this framework, LongMemEval introduces and evaluates several key optimizations:

  • Session Decomposition (Value Granularity): Granularity of storage is compared at three levels: whole session, round, and per-user-fact. Decomposing to rounds or extracting user facts notably boosts retrieval-augmented generation performance; for MR queries, further fact extraction is necessary.
  • Fact-Augmented Key Expansion (Indexing): Rather than indexing each value by its raw text alone, user facts extracted from the session are added as additional keys pointing to the same value, yielding multiple retrieval "paths" that improve retrieval recall (by ~4% recall@k) and downstream QA accuracy (by ~5%).
  • Time-Aware Query Expansion (Retrieval): For temporally grounded queries, an LLM parses the time range from the question and restricts retrieval to relevant session blocks—enhancing recall by 7–11% on temporal reasoning tasks.
  • Reading Strategies (Synthesis): Employing Chain-of-Note (CoN) reasoning—extracting key supporting evidence from each retrieved memory and prompting in a structured JSON format—yields an absolute improvement of up to 10 percentage points in answer accuracy.
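
The sketch below illustrates two of these optimizations, fact-augmented key expansion and time-aware retrieval filtering, under the same assumed interfaces as the pipeline sketch above; the fact-extraction and time-parsing steps are placeholder LLM calls, not the paper's exact prompts:

```python
def index_with_fact_keys(sessions, embed, extract_facts):
    """Index each round under multiple keys: its raw text plus extracted user facts."""
    memory = []
    for timestamp, turns in sessions:
        for turn in turns:
            value = f"{turn['role']}: {turn['content']}"
            keys = [value] + extract_facts(value)   # extra retrieval "paths" to the same value
            for key_text in keys:
                memory.append({"key": embed(key_text), "value": value, "timestamp": timestamp})
    return memory

def time_aware_retrieve(question, memory, embed, parse_time_range, k=5):
    """Restrict retrieval to entries whose timestamps fall in the question's time range."""
    time_range = parse_time_range(question)         # e.g. ("2023-05-01", "2023-06-30") or None
    candidates = memory
    if time_range is not None:
        start, end = time_range
        candidates = [m for m in memory if start <= m["timestamp"] <= end]
    q = embed(question)
    # dot() is the same similarity helper defined in the pipeline sketch above.
    return sorted(candidates, key=lambda m: -dot(q, m["key"]))[:k]
```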

5. Experimental Analysis and Results

Experiments in LongMemEval employ several model classes: commercial LLMs (e.g., GPT-4o), high-capacity open-source models (Llama 3.1 Instruct, Phi-3 Medium), and state-of-the-art dense retrievers (Stella V5 1.5B):

  • Key Results:
    • Long-context and memory-augmented LLMs exhibit 30–60% accuracy drop relative to the oracle.
    • Granularity optimization—decomposing to rounds and using fact augmentation—improves recall and NDCG in retrieval, as well as downstream QA.
    • For temporal reasoning, time-restricted retrieval delivers measurable increases in success rate.
    • Enhanced reading (CoN and structured prompts) is essential for synthesizing scattered retrievals, raising correctness by up to 10 points (a sketch of such a prompt appears after this list).
  • Ablation Studies: Systematically demonstrate each aforementioned optimization’s contribution, enabling diagnostic understanding of failure modes.
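
As a rough illustration of the Chain-of-Note style reading step referenced above, a prompt of the following shape could be used; the wording and JSON fields are assumptions for exposition, not the paper's released prompt:

```python
def chain_of_note_prompt(question: str, excerpts: list[str]) -> str:
    """Ask the model to first note the evidence in each excerpt, then answer, as JSON."""
    numbered = "\n".join(f"[{i}] {text}" for i, text in enumerate(excerpts))
    return (
        "For each numbered excerpt, write a short note on whether it contains evidence "
        "relevant to the question. Then answer using only that evidence.\n\n"
        f"Excerpts:\n{numbered}\n\nQuestion: {question}\n\n"
        'Respond as JSON: {"notes": ["..."], "answer": "..."}'
    )

# Assuming the model complies with the requested format, the answer can be parsed with
# json.loads(...) and read from the "answer" field of the resulting dictionary.
```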

These benchmarks quantitatively substantiate the value of architectural decisions in memory-augmented systems.

6. Significance and Implications

The methodology and findings of LongMemEval have substantial implications:

  • Towards Realistic Memory Benchmarks: Unlike prior long-context QA datasets limited to passive or “needle-in-haystack” retrieval, LongMemEval introduces a realistic multi-session conversation regime, emphasizing temporal evolution, knowledge change, and abstention.
  • Unified Memory Design Schema: The tripartite framework—indexing, retrieval, and reading—has been adopted or extended in recent works on reflective memory management (Tan et al., 11 Mar 2025) and continual learning evaluation (Ai et al., 20 Oct 2025).
  • Research Drivers: The persistent performance gap relative to oracle retrieval underscores ongoing challenges in scaling retrieval and reasoning, and the necessity to optimize LLM-based agents for dynamic, evolving user interaction histories rather than static QA.

A plausible implication is the need for further architectural innovation in memory store representations, retrieval methods, and strategies for information synthesis under high noise and temporal drift.

7. Dataset Release and Future Directions

All LongMemEval data, annotation protocols, and experimental code are open source at https://github.com/xiaowu0162/LongMemEval. The benchmark’s extensible design supports ongoing research on:

  • New retrieval mechanisms and hybrid memory models
  • Advanced temporal and logical reasoning in long-term dialogue
  • Techniques for compressive or hierarchical memory representations

LongMemEval provides a standardized, challenging testbed for the longitudinal memory dimensions of LLM-based conversational agents, catalyzing progress in both academic and applied research into long-lived, personalized AI assistants.
