
ComoRAG: A Cognitive-Inspired Memory-Organized RAG for Stateful Long Narrative Reasoning (2508.10419v1)

Published 14 Aug 2025 in cs.CL, cs.AI, and cs.LG

Abstract: Narrative comprehension on long stories and novels has been a challenging domain attributed to their intricate plotlines and entangled, often evolving relations among characters and entities. Given the LLM's diminished reasoning over extended context and high computational cost, retrieval-based approaches remain a pivotal role in practice. However, traditional RAG methods can fall short due to their stateless, single-step retrieval process, which often overlooks the dynamic nature of capturing interconnected relations within long-range context. In this work, we propose ComoRAG, holding the principle that narrative reasoning is not a one-shot process, but a dynamic, evolving interplay between new evidence acquisition and past knowledge consolidation, analogous to human cognition when reasoning with memory-related signals in the brain. Specifically, when encountering a reasoning impasse, ComoRAG undergoes iterative reasoning cycles while interacting with a dynamic memory workspace. In each cycle, it generates probing queries to devise new exploratory paths, then integrates the retrieved evidence of new aspects into a global memory pool, thereby supporting the emergence of a coherent context for the query resolution. Across four challenging long-context narrative benchmarks (200K+ tokens), ComoRAG outperforms strong RAG baselines with consistent relative gains up to 11% compared to the strongest baseline. Further analysis reveals that ComoRAG is particularly advantageous for complex queries requiring global comprehension, offering a principled, cognitively motivated paradigm for retrieval-based long context comprehension towards stateful reasoning. Our code is publicly released at https://github.com/EternityJune25/ComoRAG

Summary

  • The paper introduces ComoRAG, a framework that emulates human cognition to dynamically integrate new evidence with past knowledge for enhanced long narrative reasoning.
  • It leverages a hierarchical knowledge source and a metacognitive regulation loop that iteratively refines memory, achieving up to 11% performance gains over traditional RAG systems.
  • Ablation studies confirm the critical roles of the veridical layer and metacognitive processes, underscoring its adaptability across diverse narrative comprehension tasks.

ComoRAG: A Cognitive-Inspired Memory-Organized RAG for Stateful Long Narrative Reasoning

Introduction

The paper "ComoRAG: A Cognitive-Inspired Memory-Organized RAG for Stateful Long Narrative Reasoning" introduces an innovative framework for tackling the challenges of narrative comprehension in long stories and novels. Traditional LLMs struggle with reasoning over extended contexts due to diminished capacity and high computational costs. Retrieval-based approaches offer a practical alternative yet fall short due to their stateless and single-step nature. This work proposes ComoRAG, which aims to emulate human cognitive processes to overcome these limitations, enabling dynamic interplay between new evidence acquisition and past knowledge consolidation.

Framework and Methodology

Hierarchical Knowledge Source

ComoRAG constructs a multi-layered knowledge source akin to cognitive dimensions in the human brain, enabling deep contextual understanding:

  • Veridical Layer: Grounded in factual evidence with raw text chunks and knowledge triples, enhancing retrieval effectiveness.
  • Semantic Layer: Abstracts thematic structures using semantic clustering, as developed in the RAPTOR framework, for superior context abstraction.
  • Episodic Layer: Captures narrative flow and plotline through episodic representations, facilitating temporal and causal comprehension.
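The three layers above can be pictured as parallel indexes built over the same source text. The following is a minimal, illustrative sketch under assumed data structures (the layer names follow the paper, but `KnowledgeSource`, its fields, and the naive keyword retrieval are not the actual implementation, which uses embedding-based retrieval per layer):

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeSource:
    """Illustrative three-layer index; structure is an assumption."""
    veridical: list = field(default_factory=list)  # raw chunks + knowledge triples
    semantic: list = field(default_factory=list)   # RAPTOR-style cluster summaries
    episodic: list = field(default_factory=list)   # plot-ordered event summaries

    def add_chunk(self, chunk: str) -> None:
        self.veridical.append(chunk)

    def retrieve(self, query: str, k: int = 3) -> list:
        """Naive keyword-overlap scoring across all layers; a real
        system would score each layer with dense embeddings."""
        pool = self.veridical + self.semantic + self.episodic
        scored = sorted(
            pool,
            key=lambda t: -sum(w in t.lower() for w in query.lower().split()),
        )
        return scored[:k]

ks = KnowledgeSource()
ks.add_chunk("Ahab pursues the white whale across the Pacific.")
ks.semantic.append("Theme: obsession and revenge drive the captain.")
ks.episodic.append("Chapter 36: Ahab reveals his quest to the crew.")
print(ks.retrieve("who pursues the whale", k=2))
```

The point of the layered design is that a single query can be answered from whichever level of abstraction matches it: factual lookups hit the veridical layer, while thematic or plot-level questions are served by the semantic and episodic layers.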

Metacognitive Regulation Loop

Central to ComoRAG is its Metacognitive Regulation Loop, comprising:

  • Dynamic Memory Workspace: Memory units created after each retrieval operation serve as evolving knowledge states aiding deeper reasoning.
  • Regulatory and Metacognitive Processes: The loop includes planning new probes strategically, retrieving evidence, and synthesizing memory cues, enabling iterative, stateful reasoning.

By iteratively probing and updating memory, ComoRAG addresses the inherent challenge of answering complex narrative queries that require global plot comprehension.

Figure 1: Comparison of RAG reasoning paradigms.
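The regulation loop described above can be sketched as pseudocode. This is a hypothetical simplification, not the paper's implementation: `generate_probes`, `retrieve`, and `try_answer` stand in for the LLM- and retriever-backed operations, and the toy stubs below exist only to make the control flow runnable:

```python
def como_loop(query, retrieve, generate_probes, try_answer, max_cycles=3):
    """Sketch of a metacognitive regulation loop: on a reasoning
    impasse, plan new probes, retrieve evidence, consolidate memory,
    and re-attempt resolution."""
    memory = []                      # global memory pool of evidence units
    answer = try_answer(query, memory)
    cycle = 0
    while answer is None and cycle < max_cycles:   # impasse triggers a cycle
        for probe in generate_probes(query, memory):
            memory.extend(retrieve(probe))         # acquire new evidence
        memory = list(dict.fromkeys(memory))       # consolidate (dedupe, keep order)
        answer = try_answer(query, memory)         # re-attempt resolution
        cycle += 1
    return answer, cycle

# Toy stubs: the answer emerges only once both facts are in memory.
facts = {"motive": "The heir was disinherited.",
         "weapon": "A letter opener was missing."}

def retrieve(probe):
    return [facts[probe]] if probe in facts else []

def generate_probes(query, memory):
    # Plan one probe per cycle, targeting a fact not yet in memory.
    return [k for k, v in facts.items() if v not in memory][:1]

def try_answer(query, memory):
    return "the heir" if all(v in memory for v in facts.values()) else None

print(como_loop("who did it?", retrieve, generate_probes, try_answer))
# → ('the heir', 2)
```

The stubs need two cycles to accumulate both pieces of evidence, mirroring the paper's observation that reasoning typically converges within a few cycles rather than in a single retrieval step.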

Experimental Evaluation

ComoRAG was evaluated on four narrative comprehension datasets, demonstrating consistent performance gains over strong RAG baselines by up to 11%.

Performance Metrics and Analysis:

  • ComoRAG shows its largest gains on complex queries that require global comprehension, with significant improvements in F1 score and accuracy across datasets.
  • The process typically converges within 2-3 reasoning cycles, highlighting the effectiveness of its iterative probing and memory consolidation.

Figure 2: An illustration of ComoRAG. Triggered by a reasoning impasse (Failure), the Metacognitive Regulation loop consists of five core operations described in the methodology section.

Ablation Studies

A series of ablation studies confirmed the contribution of each component within ComoRAG:

  • Hierarchical Knowledge Source: Removing the Veridical layer led to a nearly 30% accuracy drop, underscoring its significance.
  • Metacognition: The absence of this module resulted in significant performance degradation, revealing its essential role in managing dynamic memory.
  • Regulation: Disabling this module affected retrieval efficiency, highlighting its importance in directing meaningful probing queries.

Table 1: Evaluation results on four long narrative comprehension datasets. ComoRAG consistently outperforms baselines.

Future Directions and Conclusion

ComoRAG has significant implications for long narrative reasoning in AI:

  • Scalability and Generalization: The framework's cognitive-inspired loop efficiently adapts to different LLM backbones, improving baseline performance substantially.
  • Application in Complex Narrative Tasks: Its dynamic and modular design facilitates a generalized approach to complex narrative comprehension, offering opportunities for integration in existing systems.

ComoRAG represents a promising step forward for retrieval-based systems, offering a paradigm aligned with cognitive processes for stateful long-context reasoning. Future work may integrate more capable LLM backbones to further enhance reasoning and broaden the framework's applicability to diverse narrative structures.

In conclusion, ComoRAG addresses the intricate demands of narrative comprehension, offering a cognitive-inspired, principled approach to retrieving and synthesizing information across long narratives. This positions it as a notable advancement in addressing stateful reasoning challenges within LLM contexts.
