
Think-on-Graph 3.0: Adaptive Graph Reasoning

Updated 29 September 2025
  • Think-on-Graph 3.0 is a retrieval-augmented generation framework that employs a dual-evolution MACER mechanism to adaptively refine both queries and evidence sub-graphs.
  • It builds a multi-resolution heterogeneous graph index by integrating sentence chunks, semantic triplets, and community clusters to enhance precision in evidence retrieval.
  • The framework’s multi-agent system—comprising Reflector, Constructor, Retriever, and Responser agents—collaboratively iterates to improve deep multi-hop query answering.

Think-on-Graph 3.0 (ToG‑3) is a retrieval-augmented generation (RAG) framework for efficient and adaptive reasoning over heterogeneous graphs, designed to support deep and broad query answering by LLMs, including lightweight and locally deployed variants. ToG‑3 introduces the Multi-Agent Context Evolution and Retrieval (MACER) mechanism, which enables dynamic construction and continual refinement of a heterogeneous graph index comprising sentence chunks, semantic triplets, and communities. The framework departs from previous approaches that utilize single-pass, static graph indices by employing a dual-evolving process: both the query and the evidence sub-graph are iteratively refined according to the ongoing reasoning, guided by a system of cooperating agents.

1. Dual-Evolution MACER Mechanism

The MACER mechanism forms the core of ToG‑3’s innovation by coupling two adaptive processes:

Evolving Query: The Reflector Agent decomposes complex queries into focused sub-queries as reasoning unfolds. Initial evidence from the graph may only partially answer the main query; the agent generates more targeted sub-queries to retrieve missing information, such as specific relationships or attributes that need clarification. This enables lightweight LLMs to break down complex reasoning tasks without succumbing to information bottlenecks or hallucination.

Evolving Sub-Graph: Parallel to query refinement, the Constructor Agent incrementally updates the candidate evidence graph. The Retriever Agent first collects a preliminary sub-graph from the universal index using vector-based retrieval that spans chunks, triplets, and communities. Based on new sub-queries, the Constructor Agent adds salient triplets or prunes irrelevant nodes, iteratively refining the sub-graph for better precision and sufficiency. The joint evolution is formalized as an episodic Markov Decision Process (MDP) over state tuples (query, sub-graph, query history), enabling theoretical analysis of convergence and efficiency.
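The source states only the state tuple and the sparse binary reward; the action space below is an illustrative assumption. A plausible formalization of the episodic MDP is:

```latex
% Hedged sketch of the episodic MDP; the action set is illustrative, not taken from the paper.
s_t = (q_t,\ \mathcal{G}_t,\ H_t), \qquad
a_t \in \{\textsc{Decompose}(q_t),\ \textsc{Add}(\tau),\ \textsc{Prune}(v)\}, \qquad
r_t = \mathbb{1}\left[\mathrm{Suff}(q, \mathcal{G}_t) = 1\right]
```

Here q_t is the current (sub-)query, 𝒢_t the evidence sub-graph, H_t the query history, τ a candidate triplet, v a candidate node, and the reward fires only when the Reflector Agent judges the accumulated context sufficient.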

The loop continues until the Reflector Agent judges, via sparse binary rewards, that the current context is sufficient to answer the query (i.e., Suff(q,𝒢′) = 1), minimizing the final retrieved sub-graph size:

\mathcal{G}^*_q = \arg\min_{\mathcal{G}' \subseteq \mathcal{G}} |\mathcal{G}'| \quad \text{subject to} \quad \mathrm{Suff}(q, \mathcal{G}') = 1
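The objective above can be illustrated with a brute-force sketch. The `suff` oracle is a hypothetical placeholder: in ToG‑3 the sufficiency judgment comes from the Reflector Agent, whereas here it is any callable returning True/False.

```python
from itertools import combinations

def minimal_sufficient_subgraph(query, graph_elements, suff):
    """Brute-force illustration of G*_q = argmin |G'| s.t. Suff(q, G') = 1.

    `graph_elements` is a set of evidence items (chunks/triplets/communities);
    `suff` is a sufficiency oracle standing in for the Reflector Agent.
    """
    # Search subsets in order of increasing size; the first sufficient
    # subset found is minimal by construction.
    elements = sorted(graph_elements)
    for size in range(len(elements) + 1):
        for subset in combinations(elements, size):
            if suff(query, set(subset)):
                return set(subset)
    return None  # no sufficient sub-graph exists in the index

# Toy example: the query is answerable once both supporting facts are present.
facts = {"(Paris, capital_of, France)", "(France, located_in, Europe)",
         "(Berlin, capital_of, Germany)"}
needed = {"(Paris, capital_of, France)", "(France, located_in, Europe)"}
result = minimal_sufficient_subgraph("Is Paris in Europe?", facts,
                                     lambda q, g: needed <= g)
```

In practice the exhaustive search is intractable; the MACER loop approximates this optimum greedily through iterative agent-driven refinement.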

2. Heterogeneous Graph Index Construction

ToG‑3 builds a multi-resolution heterogeneous graph index composed of:

  • Chunks: The text corpus is divided into sentence-level chunks, serving as basic textual units.
  • Triplets: Semantic triplets are extracted from each chunk via an LLM or other extractor, encoding entity–relation–entity information for fine-grained retrieval.
  • Communities: Entity co-occurrence graphs, formed from accumulated triplets, are clustered using the Leiden algorithm to produce coherent communities. Each community receives an abstract, vector-embedded summary for higher-level retrieval and semantic grouping.

Unlike previous static approaches, which construct graph indices once and use uniform retrieval for all queries, ToG‑3 constructs a universal heterogeneous index offline (using a frozen encoder for unified embedding) and dynamically adapts the retrieved sub-graph at runtime in response to ongoing query modifications. This improves retrieval efficiency, adaptivity, and the depth/breadth of evidence incorporated into the reasoning process.
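The offline construction of the three-level index can be sketched as follows. This is a minimal illustration: `extract_triplets` stands in for the LLM-based extractor, and connected components are used as a crude stand-in for the Leiden clustering and embedded summaries described above.

```python
import re
from collections import defaultdict

def build_hetero_index(corpus, extract_triplets):
    """Sketch of ToG-3's offline multi-resolution index construction."""
    # Level 1: sentence-level chunks.
    chunks = [s.strip() for s in re.split(r"(?<=[.!?])\s+", corpus) if s.strip()]

    # Level 2: semantic triplets extracted per chunk.
    triplets = [t for chunk in chunks for t in extract_triplets(chunk)]

    # Level 3: communities from the entity co-occurrence graph
    # (connected-component sweep as a proxy for Leiden clustering).
    adj = defaultdict(set)
    for head, _rel, tail in triplets:
        adj[head].add(tail)
        adj[tail].add(head)
    communities, seen = [], set()
    for entity in adj:
        if entity in seen:
            continue
        component, stack = set(), [entity]
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adj[node] - component)
        seen |= component
        communities.append(component)

    return {"chunks": chunks, "triplets": triplets, "communities": communities}

# Toy extractor: naive pattern matching (a real system would use an LLM).
def toy_extractor(chunk):
    m = re.match(r"(\w+) is the capital of (\w+)", chunk)
    return [(m.group(1), "capital_of", m.group(2))] if m else []

index = build_hetero_index(
    "Paris is the capital of France. Berlin is the capital of Germany.",
    toy_extractor,
)
```

A production index would additionally embed all three levels with the frozen encoder into a single vector space for similarity search.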

3. Multi-Agent Collaborative Reasoning

ToG‑3 orchestrates four collaborating agents:

  • Constructor Agent: Extracts triplets from initial context, builds and updates the sub-graph by integrating new evidence and discarding redundant components at each iteration.
  • Retriever Agent: Executes initial multi-level retrieval from the universal graph index using vector-based similarity measures, ensuring that returned sub-graphs span chunk, triplet, and community levels.
  • Reflector Agent: Evaluates the sufficiency of the current evidence sub-graph to answer the query. If deemed inadequate, the agent generates targeted follow-up sub-queries, determining the course of subsequent retrieval and graph evolution.
  • Responser Agent: Synthesizes the final answer using the complete trajectory of retrieved evidence and sub-queries, halting only when the Reflector Agent declares sufficiency.

These agents interact in a closed-loop, iteratively adapting both the query representation and the sub-graph context. The use of an explicit agent system ensures modularity and transparency for complex reasoning workflows, and facilitates expansion to multimodal or domain-specific settings.
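The closed loop can be sketched with the agents as plain callables; the real agents are LLM-backed, and the toy fact store below is purely illustrative.

```python
def macer_loop(query, retriever, constructor, reflector, responser, max_iters=5):
    """Closed-loop sketch of the four-agent MACER iteration.

    Mirrors the workflow above: retrieve an initial sub-graph, then
    alternately reflect (evolving query) and refine (evolving sub-graph)
    until sufficiency is reached, and finally synthesize the answer.
    """
    subgraph = retriever(query)              # initial multi-level retrieval
    history = [query]
    for _ in range(max_iters):
        sufficient, subquery = reflector(query, subgraph)
        if sufficient:
            break
        history.append(subquery)                    # evolving query
        subgraph = constructor(subquery, subgraph)  # evolving sub-graph
    return responser(query, subgraph, history)

# Toy agents over a tiny fact store (illustrative only).
facts = {"capital": "(Paris, capital_of, France)",
         "continent": "(France, located_in, Europe)"}
retrieve = lambda q: {facts["capital"]}
reflect = lambda q, g: (facts["continent"] in g, "Where is France located?")
construct = lambda sq, g: g | {facts["continent"]}
respond = lambda q, g, h: f"Answered '{q}' with {len(g)} facts after {len(h)} queries"

answer = macer_loop("Is Paris in Europe?", retrieve, construct, reflect, respond)
```

The `max_iters` cap bounds episode length, matching the episodic MDP framing: each iteration is one state transition, and the loop terminates either at sufficiency or at the horizon.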

4. Experimental Performance and Evaluation

ToG‑3 achieves state-of-the-art (SOTA) results on deep and broad reasoning datasets, including HotpotQA, 2WikiMultihopQA, and MuSiQue, outperforming baseline methods such as NaiveRAG, ToG-2.0, GraphRAG, and LightRAG. The ablation studies reveal:

  • Removing evolving query decomposition results in the most significant performance loss (12.6% EM and 17.9% F1 drop), demonstrating its critical role in breaking down complex, multi-hop reasoning.
  • Eliminating dynamic sub-graph refinement leads to additional accuracy decrements, underscoring the necessity of query-adaptive evidence retrieval.

The framework’s MACER mechanism successfully retrieves minimal sufficient sub-graphs for answering the original user query, enhancing both the precision and depth of reasoning without expensive full-corpus processing.

5. Practical Applications and Implications

ToG‑3 provides robust solutions for knowledge-intensive question answering, particularly in domains demanding factual precision and multistep reasoning (e.g., legal, biomedical, financial). Its efficient retrieval and reasoning capabilities support use cases in:

  • Resource-Constrained Environments: The architecture is compatible with lightweight LLMs, enabling high-grade reasoning on local or offline systems.
  • Transparent Evidence Tracking: The dual-evolving mechanism produces explicit reasoning trajectories, facilitating interpretability, error correction, and auditing.
  • Multimodal and Domain Expansion: The abstraction of graph index construction and agent interaction lends itself to further integration of structured data and multimodal evidence, enlarging the scope of retrieval-augmented reasoning.

A plausible implication is that ToG‑3’s principled dual-evolution and agent system address the scaling and adaptivity limitations of prior graph-based RAG frameworks, offering theoretical convergence guarantees and improved reasoning transparency.

6. Relationship to Prior Work

ToG‑3 expands upon previous retrieval-augmented and graph-based methods, including ToG-2.0 (Ma et al., 15 Jul 2024), FastToG (Liang et al., 24 Jan 2025), PoG (Tan et al., 18 Oct 2024), and related frameworks, by introducing dynamic, query-specific sub-graph evolution and multi-agent collaboration. Its heterogeneous graph index unifies chunk, triplet, and community-level evidence in a single vector space, while the MACER mechanism guarantees more adaptive and consistent retrieval. Ablation and comparative studies, as reported in (Wu et al., 26 Sep 2025), confirm clear empirical advantages over prior single-pass, static graph index approaches.

7. Framework Diagram

[User Query q]
      │
      ▼
[Retriever Agent] — retrieves → [Initial Sub‑Graph 𝒢₀ (Chunks, Triplets, Communities)]
      │
      ▼
[Responser Agent] — synthesizes preliminary answer
      │
      ▼
[Reflector Agent] — evaluates sufficiency
      │ 
      └─ if insufficient → [generate sub‑query q′]
                              │
                              ▼
                    [Constructor Agent] — refines sub‑graph 𝒢₁
This iterative process continues until sufficiency is met, followed by final answer synthesis.


ToG‑3 thus defines an adaptive, multi-resolution framework for LLM reasoning over heterogeneous graph indices, demonstrated to improve deep and broad multi-hop query answering, transparency, and resource efficiency relative to previous paradigms (Wu et al., 26 Sep 2025).
