
LightMem: Scalable Memory for LLMs

Updated 22 October 2025
  • LightMem is a memory system that segments and compresses LLM dialogue into sensory, short-term, and long-term modules.
  • It employs a multi-stage architecture to improve reasoning accuracy by up to 10.9% while dramatically reducing token usage and API calls.
  • Empirical evaluations on LongMemEval show efficiency gains, with token usage reduced by up to 117x and API calls reduced by up to 159x.

LightMem is a memory system for LLMs designed to achieve high reasoning accuracy and dramatic efficiency improvements in long-range, multi-turn interaction scenarios. It introduces a multi-stage memory architecture inspired by the Atkinson–Shiffrin model, featuring lightweight sensory memory for compression and topic segmentation, topic-aware short-term memory for structured summarization, and a long-term memory module with sleep-time (offline) consolidation. Empirical results on the LongMemEval benchmark using GPT and Qwen backbones demonstrate LightMem’s ability to improve accuracy by up to 10.9% while reducing token usage and API calls by over two orders of magnitude, compared to strong baselines (Fang et al., 21 Oct 2025).

1. Model Motivation and Memory System Overview

Conventional memory systems for LLMs often introduce significant computational and resource overheads, particularly as context length and interaction depth grow. These systems generally operate on raw dialogue data, resulting in large inputs, redundant or irrelevant content, and suboptimal performance in dynamic or multi-topic settings. LightMem addresses these limitations by structuring memory into three stages that mirror components of human memory:

  • Sensory Memory: Rapidly compresses and filters raw input, discarding irrelevant or redundant tokens and segmenting content by topic.
  • Short-Term Memory (STM): Organizes filtered input into manageable, topic-based segments, which are summarized and stored as structured entries.
  • Long-Term Memory (LTM): Performs consolidation and deduplication of STM entries, but crucially decouples this from online inference by relegating it to offline "sleep-time" periods.

This pipeline enables LLMs to preserve essential semantic information, maintain long-term coherence, and significantly reduce resource requirements in interactive and retrieval-intensive workloads.

2. Architectural Design and Module Functions

The core mechanics of LightMem are distributed across three synergistic modules:

A. Sensory Memory

  • Pre-Compression Submodule: Employs LLMLingua-2, a dedicated compression module, to assign retention probabilities to each token in a sequence $x$:

$$P(\text{retain } x_i \mid x; \theta) = \mathrm{softmax}(\ell_i)_1$$

Tokens are retained if $P(\text{retain } x_i \mid x; \theta) > \tau$, with $\tau$ set at the $r$-th percentile to achieve the intended compression ratio.

  • Topic Segmentation Submodule: Determines topic boundaries by combining attention-matrix criteria (identifying local maxima in attention scores between adjacent dialogue turns) with semantic similarity calculations:
    • Boundary set by attention:

    $$B_1 = \{\, k \mid M_{k,k-1} > M_{k-1,k-2},\ M_{k,k-1} > M_{k+1,k},\ 1 < k < n \,\}$$

    • Boundary set by semantic similarity:

    $$B_2 = \{\, k \mid \mathrm{sim}(s_{k-1}, s_k) < \tau \,\}$$

    • Final boundaries: $B = B_1 \cap B_2$
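The two submodules can be illustrated with a short sketch. The snippet below is a simplified, self-contained approximation: the retention probabilities, attention matrix M, and turn embeddings are assumed to be given (in the paper the retention scores come from LLMLingua-2), and the thresholds are illustrative rather than taken from the implementation.

```python
import numpy as np

def compress_tokens(tokens, retain_probs, keep_ratio=0.5):
    """Keep tokens whose retention probability exceeds the percentile threshold tau."""
    tau = np.percentile(retain_probs, 100 * (1 - keep_ratio))  # tau at the r-th percentile
    return [t for t, p in zip(tokens, retain_probs) if p > tau]

def topic_boundaries(M, turn_embeddings, sim_threshold=0.6):
    """Hybrid boundary rule (0-indexed turns).

    M[k, k-1] is the attention score between adjacent turns k and k-1;
    turn_embeddings[k] is a unit-normalized embedding of turn k.
    """
    n = M.shape[0]
    # B1: attention to the previous turn is a local maximum
    b1 = {k for k in range(2, n - 1)
          if M[k, k - 1] > M[k - 1, k - 2] and M[k, k - 1] > M[k + 1, k]}
    # B2: semantic similarity to the previous turn drops below the threshold
    b2 = {k for k in range(1, n)
          if float(turn_embeddings[k - 1] @ turn_embeddings[k]) < sim_threshold}
    return sorted(b1 & b2)  # final boundaries B = B1 ∩ B2
```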

B. Short-Term Memory

  • Each topic segment $S_i$ is summarized using an LLM-based summarizer $f_{\text{sum}}$:

$$\text{sum}_i = f_{\text{sum}}(S_i)$$

Structured entries are stored as:

$$\text{Entry}_i = \{\text{topic},\ \text{embedding}(\text{sum}_i),\ \text{user},\ \text{assistant}\}$$

This organization supports efficient semantic retrieval and reduces redundant API calls during downstream memory access.
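A minimal sketch of entry construction is shown below; the summarize and embed callables stand in for the LLM summarizer $f_{\text{sum}}$ and the embedding model, and the field and function names are illustrative rather than taken from the released code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class STMEntry:
    topic: str
    summary_embedding: List[float]  # embedding(sum_i)
    user: str                       # user-side content of the segment
    assistant: str                  # assistant-side content of the segment

def build_entry(segment: List[dict],
                topic: str,
                summarize: Callable[[str], str],
                embed: Callable[[str], List[float]]) -> STMEntry:
    """Summarize one topic segment S_i and store it as a structured STM entry."""
    user_text = "\n".join(m["text"] for m in segment if m["role"] == "user")
    asst_text = "\n".join(m["text"] for m in segment if m["role"] == "assistant")
    summary = summarize(user_text + "\n" + asst_text)  # sum_i = f_sum(S_i)
    return STMEntry(topic=topic,
                    summary_embedding=embed(summary),
                    user=user_text,
                    assistant=asst_text)
```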

C. Long-Term Memory and Sleep-Time Update

  • Memory entries are "soft-updated" by simple appending during online inference, postponing computationally intensive operations.

  • During designated sleep periods, consolidation is performed in parallel:

    • For a memory entry $e_i$ (with embedding $v_i$, timestamp $t_i$):

    $$Q(e_i) = \mathrm{Top}_k \{\, (e_j,\ \mathrm{sim}(v_i, v_j)) \mid t_j \geq t_i,\ j \neq i \,\}$$

This queue-driven parallel process deduplicates and merges entries offline, so that maintaining memory consistency does not affect online response time.
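The offline pass can be sketched as follows. The similarity measure (a dot product over unit-normalized embeddings), the grouping of each entry with its queue, and the thread-pool parallelism are simplifying assumptions; the merge itself is delegated to a caller-supplied merge_fn (for example, an LLM call), so this is not the paper's exact procedure.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def build_queue(entries, i, k=3):
    """Q(e_i): indices of the top-k most similar entries no older than e_i."""
    e_i = entries[i]
    candidates = [(j, float(np.dot(e_i["v"], e["v"])))
                  for j, e in enumerate(entries)
                  if j != i and e["t"] >= e_i["t"]]
    candidates.sort(key=lambda c: c[1], reverse=True)
    return [j for j, _ in candidates[:k]]

def sleep_time_consolidate(entries, merge_fn, k=3):
    """Offline pass: build per-entry queues in parallel, then merge/deduplicate groups."""
    with ThreadPoolExecutor() as pool:
        queues = list(pool.map(lambda i: build_queue(entries, i, k),
                               range(len(entries))))
    merged, seen = [], set()
    for i, queue in enumerate(queues):
        if i in seen:
            continue
        group = [i] + [j for j in queue if j not in seen]
        seen.update(group)
        merged.append(merge_fn([entries[j] for j in group]))  # e.g., LLM-based merge
    return merged
```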

3. Empirical Performance and Efficiency Analysis

Evaluation using LongMemEval with GPT-4o-mini and Qwen3-30B-A3B-Instruct backbones yields the following results relative to state-of-the-art baselines (A-MEM, MemoryOS, Mem0):

Metric | Max Improvement (GPT / Qwen backbones)
QA accuracy gain | Up to 10.9% / 7.67%
Token usage | Reduced by up to 117x
API calls | Reduced by up to 159x
Runtime | Reduced by over 12x

These empirical figures indicate that LightMem achieves the dual objectives of increasing context-aware reasoning performance while sharply lowering computational cost during extended interaction periods. This is attributed to aggressive token compression, topic-based grouping, and deferred consolidation.

4. Technical Workflow and Formulations

LightMem's operational flow can be summarized as follows:

  1. Token Pre-Compression
    • For input sequence $x = [x_1, \ldots, x_n]$, calculate per-token retention probabilities using a softmax over layer scores, and retain tokens above the percentile threshold $\tau$.
  2. Topic Segmentation
    • Apply the hybrid criteria of local attention maxima and semantic dissimilarity to define the boundary set $B$.
  3. Summarization and STM Entry Formation
    • For each segmented topic, the LLM summarizer generates $\text{sum}_i$; entries are structured for retrieval.
  4. Long-Term Memory Update
    • At inference time, entries are appended; during sleep time, an update queue is generated for each entry $e_i$:
      • Top-$k$ similar entries are merged, processed in parallel to minimize latency impact during online use.
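An end-to-end online pass that strings these steps together might look like the following sketch. The compressor, segmenter, summarizer, and embedder are injected as callables, and ltm.soft_update is a hypothetical append-only interface, so this is illustrative glue code rather than the released LightMem implementation.

```python
def lightmem_online_pass(dialogue_turns, compressor, segmenter, summarizer, embedder, ltm):
    """One online pass: compress, segment, summarize, then soft-update long-term memory.

    Expensive consolidation is deliberately NOT called here; it runs later,
    during sleep time (see the offline consolidation sketch above).
    """
    # 1. Token pre-compression (percentile-threshold retention)
    compressed = [compressor(turn) for turn in dialogue_turns]

    # 2. Topic segmentation (hybrid attention / similarity boundaries)
    segments = segmenter(compressed)

    # 3. Summarization and STM entry formation
    entries = []
    for seg in segments:
        summary = summarizer(seg)
        entries.append({"summary": summary,
                        "embedding": embedder(summary),
                        "segment": seg})

    # 4. Long-term memory soft update: append only, defer merging to sleep time
    for entry in entries:
        ltm.soft_update(entry)
    return entries
```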

5. Applications and Deployment Implications

LightMem is directly applicable to LLM-driven systems requiring long-horizon memory and efficient retrieval, including:

  • Conversational agents with hundreds of turns, avoiding context-window overflow and redundant context consumption.
  • Customer support bots and virtual assistants, where token budget and API cost are enterprise-critical.
  • Scenarios demanding both high fidelity context retention and low-latency online updating, as the expensive memory consolidation is confined to offline phases.

A crucial aspect of deployment is the decoupling of memory management from live session response time, achieved by relegating intensive summary merging and deduplication to offline “sleep” periods.

6. Future Directions

Proposed avenues for extension include:

  • Offline Update Acceleration: Use of precomputed key–value caches to further accelerate sleep-time consolidation.
  • Knowledge Graph-Augmented Memory: Integration of lightweight semantic graph structures to support compositional (multi-hop) reasoning.
  • Multimodal Extension: Expanding memory to cross-modal scenarios (visual, auditory, textual) for embodied agent tasks.
  • Hybrid Parametric/Non-Parametric Memory: Improved synergy between the LLM's parametric memory and the explicit, non-parametric LightMem repository for robust and interpretable context retention.

These directions suggest LightMem may evolve into a foundational approach for extending LLMs into settings requiring efficient, scalable, and reliable long-term memory across modalities.

7. Significance and Positioning

By systematically combining memory filtering, topic segmentation, succinct summarization, and offline consolidation, LightMem offers a methodologically grounded solution to the challenges of scalable LLM memory augmentation. Its design follows principles observed in human cognition and sets a precedent for decoupling online response efficiency from memory maintenance. The demonstrated reductions in token and API usage, with comparable or improved accuracy, mark a substantial advance in practical memory-augmented generation for LLMs (Fang et al., 21 Oct 2025).
