StoryMem Framework: Modeling Narrative Memory

Updated 5 January 2026
  • StoryMem Framework is a formal and computational model that tracks and updates narrative memory using logical structures and incremental mechanisms.
  • It employs queryable update functions to align a Reader’s evolving memory with a Narrator’s intended story state in real time.
  • The framework introduces set-difference accuracy and entropy-based coherence metrics to quantify and monitor narrative consistency across multimedia and dialogue applications.

The StoryMem Framework refers to a class of formal and computational models for tracking, updating, and evaluating memory and coherence in narrative understanding and generation. Originating as a logical-cognitive architecture for story comprehension in (Castricato et al., 2021), the StoryMem paradigm has influenced recent developments in multimedia storytelling, conversational memory management, and evaluation protocols for narrative systems (Zhang et al., 22 Dec 2025; Chen et al., 15 Sep 2025). It involves precise representations of a recipient’s (Reader’s) evolving memory of a narrative, incremental update mechanisms as new discourse is ingested, and explicit metrics for measuring alignment and coherence between the intended (Narrator) and reconstructed (Reader) stories.

1. Core Objects and Formal Notation

The foundational StoryMem formalism uses model-theoretic structures in a logical language $\mathcal{L}$ to specify the evolving memory states of two agents:

  • Narrator ($N$): Holds the true state of the story-world at each time $t$, encoded in a model $S_N(t)$ with its associated theory $\tilde S_N(t)$.
  • Reader ($R$): Maintains a current story-world model $S_R(t)$ and theory $\tilde S_R(t)$, representing explicit beliefs at time $t$.

The fabula for each agent collects the narrative propositions being managed: $F_N(t) \subseteq \tilde S_N(t)$ for the Narrator and $F_R(t) \subseteq \tilde S_R(t)$ for the Reader. As the narrative unfolds, units of new information $I_t = F_N(t) \setminus F_N(t-1)$ are communicated. The Reader’s memory is best represented not as a single world-model, but as a plausible set $\mathbf{S}_R(t) = \{\, M \mid M \models F_R(t) \,\}$.

Uncertainty is quantified via a filter $\mathcal{F}_w(t)$ over $\mathcal{P}(\mathbf{S}_R(t))$; in some cases, this can be further constrained to a weak ultrafilter $\mathcal{UF}_w(t)$, yielding a structured view of Reader plausibility (Castricato et al., 2021).
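A minimal sketch of the plausible-set construction, under the illustrative assumption (not from the source) that a world is a finite set of atomic propositions and $M \models F$ simply means $F \subseteq M$:

```python
from itertools import combinations

# Toy instantiation (illustrative assumption, not the paper's encoding):
# a world M is a frozenset of the atomic propositions true in it, and
# M |= F_R(t) holds when every proposition in the fabula is true in M.
ATOMS = {"hero_alive", "dragon_slain", "treasure_found"}

def all_worlds(atoms):
    """Enumerate every candidate world over the atom vocabulary."""
    atoms = sorted(atoms)
    return [frozenset(c) for r in range(len(atoms) + 1)
            for c in combinations(atoms, r)]

def plausible_set(fabula, atoms=ATOMS):
    """S_R(t) = { M | M |= F_R(t) }: the worlds consistent with the fabula."""
    return [w for w in all_worlds(atoms) if fabula <= w]

print(len(plausible_set(set())))           # 8 (all 2^3 worlds)
print(len(plausible_set({"hero_alive"})))  # 4
```

Each new fabula entry halves the plausible set here, which mirrors how communicated information progressively narrows Reader uncertainty.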

2. Incremental Update Function and Querying

StoryMem posits an incremental, queryable procedure for memory evolution. At each time $t$:

$$S_R(t+1) = f\bigl(S_R(t),\, I_{t+1}\bigr)$$

with composition:

$$\begin{aligned} F_R(t+1) &= F_R(t) \cup I_{t+1} \\ \mathbf{S}_R(t+1) &= \{\, M \mid M \models F_R(t+1) \,\} = \mathbf{S}_R(t) \cap \mathbf{S}(F_R(t+1)) \\ S_R(t+1) &= \psi(F_R(t+1)) \end{aligned}$$

The operator $\zeta_R$ formalizes fabula updates, while $\psi$ generates a (possibly aggregated) single model if needed.
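The update $f$ can be sketched in a toy setting where worlds are frozensets of true propositions (an illustrative assumption): the fabula update is set union, and the plausible set shrinks by filtering to models of the enlarged fabula.

```python
# Toy worlds: frozensets of true atomic propositions (illustrative
# assumption). The fabula update (zeta_R) is set union; the plausible
# set shrinks by keeping only models of the enlarged fabula.
def update(fabula, worlds, new_info):
    """f(S_R(t), I_{t+1}): returns (F_R(t+1), S_R(t+1))."""
    new_fabula = fabula | new_info                       # F_R(t+1) = F_R(t) ∪ I_{t+1}
    new_worlds = [w for w in worlds if new_fabula <= w]  # models of F_R(t+1)
    return new_fabula, new_worlds

worlds0 = [frozenset(), frozenset({"a"}), frozenset({"b"}), frozenset({"a", "b"})]
fab1, s1 = update(set(), worlds0, {"a"})
print(sorted(map(sorted, s1)))  # [['a'], ['a', 'b']]
```

Because filtering only ever removes worlds, the intersection property of the composition holds by construction.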

To answer queries ("Does proposition $q$ hold?"), a probability is computed across sampled plausible worlds $s' \subseteq \mathcal{F}_w(t)$. The verdict is returned as "Yes" if $P_{s'}(q) \approx 1$, "No" if $P_{s'}(q) \approx 0$, or "Undecided" otherwise (Castricato et al., 2021).
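The three-valued verdict can be sketched as follows; the `hi` and `lo` thresholds are hypothetical stand-ins for the paper's $P \approx 1$ and $P \approx 0$ conditions, and worlds are again frozensets of true propositions.

```python
# Three-valued query answering over sampled plausible worlds. hi/lo are
# hypothetical thresholds approximating the P ≈ 1 and P ≈ 0 conditions.
def query(worlds, q, hi=0.95, lo=0.05):
    """Return 'Yes', 'No', or 'Undecided' from P_{s'}(q)."""
    p = sum(q in w for w in worlds) / len(worlds)
    if p >= hi:
        return "Yes"
    if p <= lo:
        return "No"
    return "Undecided"

sample = [frozenset({"a"}), frozenset({"a", "b"}), frozenset({"b"})]
print(query(sample, "a"))  # Undecided (P = 2/3)
print(query(sample, "c"))  # No (P = 0)
```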

3. Information-Conveyance Accuracy Metrics

StoryMem introduces concrete mechanisms to quantify how closely the Reader’s reconstructed beliefs align with the Narrator’s intentions. The set-difference accuracy at time $t$ is

$$\mathrm{Acc}(t) = 1 - \frac{|\tilde S_N(t)\, \Delta\, \tilde S_R(t)|}{|\tilde S_N(t) \cup \tilde S_R(t)|} \in [0,1],$$

where $\Delta$ denotes the symmetric difference.

When probabilistic world-samples are available, KL-divergence is also used:

$$D_{\mathrm{KL}}(P_N \,\|\, P_R) = \sum_{q} P_N(q)\, \log \frac{P_N(q)}{P_R(q)}$$

These metrics enable real-time monitoring of the story-processing fidelity and guide adaptive clarification or inference-repair steps when accuracy drops (Castricato et al., 2021).
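Both metrics are easy to compute once theories are represented as sets and query distributions as dictionaries; in this sketch, the `eps` smoothing term is an implementation choice to guard against log(0), not part of the formalism.

```python
import math

# Theories as sets of propositions; query distributions as dicts mapping
# propositions to probabilities (illustrative representations).
def set_accuracy(theory_n, theory_r):
    """Acc(t) = 1 - |T_N Δ T_R| / |T_N ∪ T_R| (defined as 1.0 if both empty)."""
    union = theory_n | theory_r
    return 1.0 if not union else 1 - len(theory_n ^ theory_r) / len(union)

def kl_divergence(p_n, p_r, eps=1e-12):
    """D_KL(P_N || P_R) summed over the Narrator's query vocabulary.
    eps guards log(0); it is a numerical convenience, not formalism."""
    return sum(p * math.log((p + eps) / (p_r.get(q, 0.0) + eps))
               for q, p in p_n.items() if p > 0)

print(set_accuracy({"a", "b", "c"}, {"a", "b", "d"}))  # 0.5
```

An `Acc(t)` that drifts downward between updates is the signal that would trigger the clarification or inference-repair steps described above.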

4. Entropy-Based Coherence Evaluation

Two novel entropy-derived coherence metrics, Entropy of World Coherence (EWC) and Entropy of Transitional Coherence (ETC), provide fine-grained insight into memory consistency and transition:

  • EWC:

    • Given a sampled world-set $s' \subseteq \mathbf{S}_R(t)$ and a proposition set $Q$, define

    $$P_{s'}(q) = \frac{|\{\, M \in s' : M \models q \,\}|}{|s'|}$$

    The EWC score is then

    $$\mathrm{EWC}(s', Q) = \frac{1}{|Q|} \sum_{q \in Q} P_{s'}(q)$$

    High EWC (close to 1) indicates that the Reader’s memory worlds generally agree on the queries in $Q$.

  • ETC:

    • Given a pre-update sample $s'(t') \subseteq \mathbf{S}_R(t')$ and the actual worlds $W(t)$ at a later time $t > t'$, with implications $q \in Q$, set

    $$P_{s'(t')}(q) = \Pr_{M \in s'(t')} [\, M \models q \,]$$

    The ETC score is then

    $$\mathrm{ETC}(s'(t'), Q) = \frac{1}{|Q|} \sum_{q \in Q} P_{s'(t')}(q)$$

    This quantifies how accurately the Reader’s transitions reflect the Narrator’s intended updates (Castricato et al., 2021).
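Since EWC and ETC share the same averaging form over a query set $Q$, one helper covers both; only the world sample differs (current sample versus pre-update sample $s'(t')$). The frozenset world encoding is again an illustrative assumption.

```python
# EWC and ETC share the same averaging form; only the sample differs
# (current worlds for EWC, the pre-update sample s'(t') for ETC).
# Worlds are frozensets of true propositions (illustrative assumption).
def prop_prob(worlds, q):
    """P_{s'}(q): fraction of sampled worlds in which q holds."""
    return sum(q in w for w in worlds) / len(worlds)

def coherence_score(worlds, queries):
    """EWC(s', Q), or ETC when given a pre-update sample: mean P over Q."""
    return sum(prop_prob(worlds, q) for q in queries) / len(queries)

sample = [frozenset({"a", "b"}), frozenset({"a"}), frozenset({"a", "b"})]
print(round(coherence_score(sample, ["a", "b"]), 3))  # 0.833
```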

5. Unified Coherence-Monitoring and Intervention Architecture

The StoryMem architecture brings together initialization, incremental ingestion, querying, accuracy and coherence monitoring, and adaptive intervention:

  1. Initialization: Set $F_R(0) = \varnothing$, let $\mathbf{S}_R(0)$ contain all models of the empty theory, and start with a wide plausibility filter.
  2. Ingestion Loop: At each step, receive $I_{t+1}$, update via $\zeta_R$, prune or focus $\mathcal{F}_w$, and optionally select a single best model via $\psi$.
  3. Memory Querying: Compute $P_{s'}(q)$ for queries, using sampled plausible worlds.
  4. Accuracy and Coherence Monitoring: Regularly compute $\mathrm{Acc}(t)$, EWC, and ETC.
  5. Adaptation: Below-threshold accuracy/coherence triggers clarification or inference-repair (e.g., contraction, revision).

This incremental, queryable design supports both cognitive modeling and real-time computational narrative systems (Castricato et al., 2021).
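The five steps can be sketched end to end in a toy setting: three atoms, a hypothetical Narrator intention ("c"), and a print statement standing in for the clarification step. Every name and threshold here is an illustrative assumption, not the paper's implementation.

```python
from itertools import combinations

# End-to-end toy run of the five architecture steps. The atom set, the
# Narrator's intended proposition "c", and the thresholds are all
# illustrative assumptions.
ATOMS = ["a", "b", "c"]
ACC_THRESHOLD = 0.5

def models_of(fabula):
    """All worlds over ATOMS that satisfy the fabula."""
    return [frozenset(s) for r in range(len(ATOMS) + 1)
            for s in combinations(ATOMS, r) if fabula <= set(s)]

fabula, worlds = set(), models_of(set())                # 1. Initialization
for info in [{"a"}, {"b"}]:                             # 2. Ingestion loop
    fabula |= info                                      #    zeta_R: fabula union
    worlds = models_of(fabula)
    p_c = sum("c" in w for w in worlds) / len(worlds)   # 3. Query P_{s'}("c")
    narrator = fabula | {"c"}                           # 4. Narrator's theory
    acc = 1 - len(fabula ^ narrator) / len(fabula | narrator)
    if acc < ACC_THRESHOLD or 0.05 < p_c < 0.95:        # 5. Adaptation trigger
        print("requesting clarification about 'c'")
print(round(acc, 2))  # 0.67
```

Here the undecided query about "c" triggers the adaptation hook on each step, illustrating how monitoring and intervention interleave with ingestion.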

6. Influence on Memory-Driven Dialogue and Video Storytelling

StoryMem’s separation of narrative propositions, memory states, and coherence metrics has shaped related frameworks in long-form media and dialogue. In multi-shot long video generation, StoryMem has inspired architectures that condition each generated shot on a compact, dynamically updated memory bank of semantic keyframes (Zhang et al., 22 Dec 2025). These systems implement explicit memory-to-generation modules—such as latent concatenation, negative rotary positional embedding (RoPE) shifts, and memory sink mechanisms—to ensure prompt adherence and cross-shot consistency, evaluated by custom benchmarks (e.g., ST-Bench) and metrics drawing from StoryMem’s consistency logic.

In ultra-long dialogue scenarios, the MOOM framework adapts a dual-branch “memory plugin” in which separate branches handle hierarchical plot summarization and persona extraction, respectively. MOOM’s architecture invokes memory management and accuracy evaluation logic reminiscent of StoryMem, incorporating bounded memory buffers and explicit forgetting mechanisms to prevent uncontrolled growth and support memory querying and intervention (Chen et al., 15 Sep 2025).

7. Applications and Prospective Research Directions

The StoryMem framework underpins advances in computational narratology, interpretable AI memory systems, and the evaluation of narrative coherence in generated text and multimedia. Applications address:

  • Incremental story understanding and memory alignment in narrative intelligence
  • Coherence and long-range consistency in multi-turn dialogues and long-form video synthesis
  • Real-time monitoring, adaptation, and interactive clarification in memory-driven systems

Future research may extend StoryMem’s formalism to richer world representations, adaptive filtering schemes, and integration with neural memory components for scalable deployment in creative AI and memory-centric assistants.
