
Layered Memory Models: Multi-Scale Architectures

Updated 21 February 2026
  • Layered memory models are defined by distinct memory strata, each specialized in temporal, semantic, or functional roles for optimized retention and retrieval.
  • They integrate neural, algorithmic, and hardware techniques to enhance system efficiency, demonstrating measurable gains in accuracy, recall, and energy use.
  • Applications span from advanced AI agents and language models to cognitive architectures, offering robust multi-scale memory management for dynamic learning.

Layered memory models refer to neural, algorithmic, or system-level architectures in which memory is explicitly partitioned into distinct strata, each corresponding to a different timescale, semantic abstraction, or functional role. These models are foundational in AI, neuroscience-inspired architectures, language modeling, and memory hierarchy design, offering structured retention, prioritization, and retrieval of information for efficient learning, reasoning, or computation.

1. Taxonomies and Formal Definitions

Layered memory models are formulated as arrangements of memory modules, where each layer specializes in a subset of the overall memory function—temporal span, specificity, abstraction, or controllability. Multi-scale memory partitioning has been justified in cognitive science as mirroring human short-, middle-, and long-term memory (Li et al., 2023), as well as in computer systems via register/cache/DRAM hierarchies (0710.4656).

In contemporary deep learning and LLM systems:

  • Taxonomy: Modern frameworks split memory into parametric, contextual, external, and procedural/episodic layers, each described by a memory quadruple $(\text{loc},\;\text{pers},\;\text{write\_path},\;\text{access/control})$ (Zhang et al., 23 Sep 2025).
  • Cognitive architectures: Hierarchical ensembles (rote, type, script/scheduler) are mapped to biological and psychological models (Greer, 2020).
  • Associative and memory-layer models: Explicit multi-layer, key-value, or CMM stacks support massive, edit-friendly memory capacity and modular retrievability (Berges et al., 2024, Zanzotto et al., 18 Feb 2025, Krotov, 2021).

Memory-layer operations are governed by mechanisms for writing (ingest/update), reading (query/retrieval), and updating/inhibiting (edit/forgetting/replay), enabling flexible yet controllable access across layers (Zhang et al., 23 Sep 2025, Zhang et al., 16 Dec 2025).
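The quadruple taxonomy and the write/read/update operations above can be made concrete with a small data model. The following is an illustrative sketch only, not an API from any cited work; the enum values, field names, and types are assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class Location(Enum):
    """Where a memory stratum physically lives (assumed example values)."""
    PARAMETRIC = "weights"      # model parameters
    CONTEXTUAL = "kv_cache"     # in-context / KV-cache
    EXTERNAL = "vector_db"      # external index or database
    PROCEDURAL = "logs"         # logs, timelines, replayable traces

@dataclass
class MemoryLayer:
    """One stratum, described by the (loc, pers, write_path, access/control) quadruple."""
    loc: Location       # storage locus
    pers: float         # persistence horizon, e.g. expected lifetime in steps
    write_path: str     # how writes happen: "gradient", "context_injection", "indexing"
    access: str         # read/control mechanism: "attention", "similarity", "replay"

# Example: an external vector-store layer, written by indexing, read by similarity search
external = MemoryLayer(Location.EXTERNAL, pers=1e6,
                       write_path="indexing", access="similarity")
```

A system would hold one such descriptor per stratum and dispatch write/read/update calls according to `write_path` and `access`.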

2. Layered Memory Architectures: Mechanisms and Mathematical Formalism

Neural and Algorithmic Structures

  • Hierarchical Associative Memory (HAM):
    • $L$ layers, each with $N_A$ neurons (states $x_i^{(A)}$), and symmetric feedforward/feedback weights $\xi^{(A+1,A)}$.
    • Dynamics minimize a Lyapunov energy $E(\{x^{(A)}\})$; iterative updates converge to attractors representing stored memories.
    • Lower layers encode primitives/features; higher layers encode "assembling rules" (Krotov, 2021).
  • Layered Memory in LLM Agents:
    TradingGPT employs STM, MTM, and LTM layers. Memory events $E$ accrue a layer-specific ranking score:

    $$\gamma_i^E = \alpha_i S_{\text{Recency}}^E + \beta_i S_{\text{Relevancy}}^E + \lambda_i S_{\text{Importance}}^E$$

    with explicit recency and semantic-similarity weights, and an exponential decay kernel $M_t^{(i)} = M_{t-\Delta t}^{(i)} e^{-\Delta t / Q_i} + I_t^{(i)}$ (Li et al., 2023).

  • Layered Latent State Reconstruction:

    • Contextual Memory Reweaving maintains a latent memory bank $M^{(\ell)}$ at each transformer layer, fusing past hidden states via a learned attention and gating mechanism:

    $$c_t^{(\ell)} = \sum_{k\in K_t^{(\ell)}} \alpha_{t,k}^{(\ell)} h_k^{(\ell)}$$

    $$\tilde{h}_t^{(\ell)} = g_t^{(\ell)} \circ c_t^{(\ell)} + (1-g_t^{(\ell)})\circ h_t^{(\ell)}$$

    (Dillon et al., 4 Feb 2025).

  • Associative Memory Layers:

    • MeMo and Memory Layers at Scale stack explicit key–value matrices across multiple layers:

    $$M = \sum_{i=1}^N \vec{k}_i\,\vec{v}_i^\top$$

    Query, attention, and retrieval are compositional, supporting multi-layer chunking for massively increased capacity at fixed per-layer parameter count (Zanzotto et al., 18 Feb 2025, Berges et al., 2024).
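The outer-product store $M = \sum_i \vec{k}_i \vec{v}_i^\top$ can be exercised in a few lines. Below is a minimal pure-Python sketch, assuming orthonormal keys and making no claim about the cited papers' actual implementations: retrieval multiplies a query key into $M$ and recovers the associated value.

```python
def outer(u, v):
    """Outer product u v^T as a list-of-lists matrix."""
    return [[ui * vj for vj in v] for ui in u]

def mat_add(A, B):
    """Elementwise sum of two matrices of identical shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def read(M, k):
    """Compute k^T M: project a query key onto the stored values."""
    return [sum(ki * M[i][j] for i, ki in enumerate(k)) for j in range(len(M[0]))]

# Store two associations with orthonormal keys: M = k1 v1^T + k2 v2^T
k1, v1 = [1.0, 0.0], [3.0, 5.0, 7.0]
k2, v2 = [0.0, 1.0], [2.0, 4.0, 6.0]
M = mat_add(outer(k1, v1), outer(k2, v2))

# Retrieval with k1 returns v1 exactly, because k1 is orthogonal to k2 and unit-norm
print(read(M, k1))  # → [3.0, 5.0, 7.0]
```

With correlated keys the read would return a superposition of values (crosstalk); the multi-layer chunking in MeMo-style stacks is one way to push capacity beyond what a single such matrix supports.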

Systemic and Hardware Layering

  • Classical memory hierarchy: Analysis covers register/SRAM/DRAM multi-layered architectures, optimized by integer programming to maximize data reuse and minimize energy within hard capacity, bandwidth, and data lifetime constraints (0710.4656).

  • Compiler-level layering: Two-phase models separate infinite (idealized, unbounded) from finite (hardware-constrained) memory, with a formal refinement mapping preserving correctness and enabling modular optimizations (Beck et al., 2024).

3. Cognitive, Functional, and Biological Foundations

Layered memory models are directly influenced by established theories of human memory:

  • Cognitive Three-Layer Models: Synthesize instance-based (ensemble), type/semantic (concept-tree), and procedural (CPL/scheduler) memory, enforcing physiological constraints (counting rules, chunking) and formal mechanisms for activation, chunk coding, and cycle-based retrieval (Greer, 2020).

  • Self-organizing Visual Memories: Parts-based neural codes emerge from winner-take-all dynamics and bidirectional plasticity across at least two layers, successfully accounting for viewpoint-invariant recognition and efficient sparse representation (0905.2125).

  • Biological Plausibility in HAM: Energy-based, feedback-rich multi-layer associative networks implement inference and pattern completion using only local updates and symmetric connections, mapping to cortical circuits (Krotov, 2021).

The functional role of each memory layer often aligns with notions of working (transient), episodic (task or session), procedural (scripted or replayable), and semantic (consolidated) memory, each with characteristic temporal scope, persistence, and control mechanisms (Zhang et al., 16 Dec 2025, Zhang et al., 23 Sep 2025).

4. Layered Memory in LLMs

Recent LLM architectures structure memory hierarchically to address context limitations, long-term consistency, and robust adaptation:

  • Layer Taxonomies and Evaluation: Zhang et al. formalize parametric (weights), contextual (KV-cache), external (vector/DB), and procedural (logs/timeline) memory, each assessed via tailored metrics—accuracy, recall, faithfulness, timeliness, robustness—using a three-setting (PO, offline, online) protocol for comparability (Zhang et al., 23 Sep 2025).

  • CogMem Framework: Long-Term Memory (LTM) integrates cross-session strategies, Direct Access (DA) holds session facts, and Focus of Attention (FoA) dynamically reconstructs minimal context, formally tied via retrieval and consolidation equations; experimentally, FoA+DA+LTM yields 17%–33% accuracy improvements and 2× reduction in context tokens relative to flat approaches (Zhang et al., 16 Dec 2025).

  • Memory Layers at Scale: Memory layers add $O(Nd)$ parameters at nearly fixed computation via TopK key–value retrieval, outperforming dense FFNs and mixture-of-experts models on knowledge-rich tasks, with scaling laws observed up to 128B memory parameters and smooth log-linear accuracy gains (Berges et al., 2024).

  • Associative Layering for Direct Memorization: MeMo and HAM stack explicit CMM/Modern Hopfield blocks, enabling transparent, editable, and high-capacity memorization, supporting direct inspection, chunking, and information deletion (Zanzotto et al., 18 Feb 2025, Krotov, 2021).

  • Memory Governance: Dynamic Memory Management frameworks (DMM-Gov) coordinate parametric, contextual, external, and episodic edits, with specification of admission thresholds, forgetting, rollback, and metric-based governance (Zhang et al., 23 Sep 2025).
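The TopK key–value retrieval used by memory layers can be sketched compactly: score the query against all keys, keep the top-$k$ slots $I$, softmax the surviving scores into $s$, and return $y = s^\top V[I]$. This is an illustrative sketch of the scheme, not the paper's code:

```python
import math

def topk_memory_read(q, keys, values, k=2):
    """Memory-layer read: softmax over top-k key scores, then y = s^T V[I]."""
    # Dot-product score of the query against every key
    scores = [sum(qi * ki for qi, ki in zip(q, key)) for key in keys]
    # Indices I of the k largest scores
    I = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    # Softmax restricted to the selected slots (max-subtracted for stability)
    m = max(scores[i] for i in I)
    exps = [math.exp(scores[i] - m) for i in I]
    Z = sum(exps)
    s = [e / Z for e in exps]
    # Weighted combination of the selected values: y = s^T V[I]
    dim = len(values[0])
    return [sum(si * values[i][j] for si, i in zip(s, I)) for j in range(dim)]

# With k=1 the read returns the value of the single best-matching key slot
print(topk_memory_read([10.0, 0.0],
                       [[1.0, 0.0], [0.0, 1.0]],
                       [[1.0, 2.0], [3.0, 4.0]], k=1))  # → [1.0, 2.0]
```

Because only $k$ slots are touched per query, the parameter count of the value table can grow essentially independently of per-token compute, which is the source of the near-fixed-computation scaling noted above.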

5. Memory Dynamics: Write, Read, Update, and Forgetting

Layered memory systems share a common operational pipeline:

  • Write: Information ingested via gradient updates (weights), context injection (KV cache), or external indexing (vector DB), with layer-specific batch or online mechanisms.

  • Read: Retrieval via attention, semantic similarity, or layer-prioritized scoring; e.g., in TradingGPT, selection is by

$$\gamma_i^E = \alpha_i S_{\text{Recency}}^E + \beta_i S_{\text{Relevancy}}^E + \lambda_i S_{\text{Importance}}^E$$

and in memory layers by

$$y = s^\top V[I]$$

where $s$ is the softmaxed TopK score vector (Berges et al., 2024, Li et al., 2023).
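The layer-prioritized scoring and decay dynamics above reduce to two small functions. In this sketch the weights $\alpha_i, \beta_i, \lambda_i$ and the decay constants $Q_i$ are placeholder values, not the calibration used in TradingGPT:

```python
import math

def layer_score(recency, relevancy, importance, alpha, beta, lam):
    """Layer-specific ranking: gamma_i^E = alpha*S_rec + beta*S_rel + lambda*S_imp."""
    return alpha * recency + beta * relevancy + lam * importance

def decay_step(M_prev, dt, Q, influx):
    """Exponential decay kernel: M_t = M_{t-dt} * exp(-dt/Q) + I_t."""
    return M_prev * math.exp(-dt / Q) + influx

# A short-term layer (small Q, assumed) forgets faster than a long-term layer (large Q)
stm = decay_step(1.0, dt=10.0, Q=5.0, influx=0.0)    # ≈ 0.135
ltm = decay_step(1.0, dt=10.0, Q=100.0, influx=0.0)  # ≈ 0.905
```

Reads would rank candidate events by `layer_score` within each stratum, while `decay_step` is applied per layer between reads, so the same event fades at layer-specific rates.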

6. Experimental Evaluation and Scaling Laws

Layered memory models exhibit concrete empirical improvements:

  • Language Modeling:

    • Memory+ layers show factual QA accuracy scaling logarithmically with memory size, e.g., from 7.7% to 20.8% on NaturalQuestions as memory parameters grow from 0 to 128B, surpassing similarly sized dense and MoE architectures (Berges et al., 2024).
    • Multi-layer CMMs in MeMo achieve recall $\geq 0.96$ on sequences of up to 300K tokens, including repeated and fake-decoy spans, with $d=4096$, $l=3$ (Zanzotto et al., 18 Feb 2025).
    • CogMem enables sustained multi-turn accuracy up to 93% on TurnBench-MS, with <50% token growth compared to unlayered baselines (Zhang et al., 16 Dec 2025).
  • AI Agents and Reasoning:
    • TradingGPT's STM–MTM–LTM hierarchy yields 20% higher cumulative return and 15% lower volatility than flat-memory baselines on ARK fund trading (Li et al., 2023).
  • Systems and Embedded Memory Hierarchies:
    • Layered allocation and prefetching reduce execution time by up to 60% and energy by up to 70% on industrial multimedia kernels (0710.4656).
  • Cognitive and Visual Tasks:
    • Hierarchical assemblies in recurrent visual memory models accelerate learning by 33% and improve generalization and sparsification of representations (0905.2125).

Scaling laws (log-linear accuracy vs parameter count), smooth capacity improvement, and robust mitigation of decay, drift, hallucination, and memory inefficiency have been consistently documented (Berges et al., 2024, Zhang et al., 16 Dec 2025, Dillon et al., 4 Feb 2025).

7. Governance, Evaluation Standards, and Future Propositions

Operational layering supports systematic evaluation, robust update/forgetting, and dynamic governance:

  • Unified protocols measure parametric (recall, editing, privacy), contextual (mid-span drop), external (retrieval/attribution), and procedural/episodic (timeline replay) memory via specifically aligned test sets and confidence intervals (Zhang et al., 23 Sep 2025).
  • DMM-Gov coordinates memory updates (PEFT, RAG, model editing), with rollout, rollback, audit logs, and governance certificates ensuring reproducibility and compliance.
  • Propositions include minimum identifiability of edited loci, minimally sufficient evaluation cards, causally-constrained editing, and retrieval+window strategies as superior to ultra-long context feeding under fixed resources (Zhang et al., 23 Sep 2025).
  • Open directions involve superlinear capacity scaling in hierarchical associative models, compositional chunking, and continual learning across memory strata.

Layered memory models thus provide a rigorous, multi-domain foundation for scalable, editable, and robust memory in neural networks, cognitive architectures, LLMs, and systems design, supported by strong theoretical formalism, diverse instantiations, and reproducible empirical gains.
