
Graph Counselor: Adaptive Multi-Agent GraphRAG

Updated 4 November 2025
  • Graph Counselor is a multi-agent framework that adaptively extracts hierarchical graph data to improve LLM reasoning over complex queries.
  • It employs coordinated Planning, Thought, and Execution agents to dynamically determine retrieval depth and accurately capture multi-level relationships.
  • The framework’s iterative self-reflection mechanism mitigates semantic and structural errors, ensuring robust performance on multi-hop question answering.

Graph Counselor is a multi-agent graph retrieval-augmented generation (GraphRAG) framework for enhancing LLM reasoning over knowledge graphs. By introducing adaptive, hierarchical graph information extraction and iterative self-reflection, Graph Counselor addresses fundamental limitations in previous static or single-agent GraphRAG methodologies, resulting in substantial improvements in factual accuracy, reasoning robustness, and efficiency for complex graph QA tasks (Gao et al., 4 Jun 2025).

1. Motivation and Problem Formulation

Conventional GraphRAG methods integrate structured knowledge into LLM workflows by retrieving or traversing subgraphs to provide factual context. However, these systems are subject to two primary limitations:

  1. Inefficient Information Aggregation: Existing approaches typically employ a single agent with fixed, shallow extraction patterns, restricting their ability to capture multi-level and compositional relationships inherent in real-world graph data.
  2. Rigid Reasoning Depth: Prior frameworks rely on static, pre-set traversal schemes, which cannot dynamically adapt reasoning depth or structure. This leads to either under-reasoning (missing critical context) or over-reasoning (retrieving irrelevant information), and cannot recover from semantic or structural reasoning errors.

The core objective of Graph Counselor is to introduce adaptive, multi-agent graph exploration combined with self-reflective reasoning, enabling LLMs to both plan and dynamically refine their knowledge aggregation process for graph-based question answering and reasoning.

2. Architecture: Multi-Agent Adaptive Graph Information Extraction

The cornerstone of Graph Counselor is the Adaptive Graph Information Extraction Module (AGIEM), which decomposes graph reasoning into coordinated, compositional interactions between three distinct agent roles:

a. Planning Agent

  • Input: Receives the user query or prior reasoning context.
  • Function: Analyzes semantic intent and decomposes the problem into a high-level reasoning plan—defining subgoals, dependencies, and an initial reasoning path.

b. Thought Agent

  • Input: Receives the output of the Planning Agent.
  • Function: Determines the precise graph information required for current subgoal resolution, focusing downstream retrieval on relevant neighborhoods and relations to avoid spurious or shallow explorations.

c. Execution Agent

  • Input: Operates according to the subplans and targets specified by the prior agents.
  • Function: Executes adaptive, compositional queries using a suite of fundamental operators:
    • $\mathrm{Retrieve}(t) \rightarrow I_v$: Semantic node retrieval by text.
    • $\mathrm{Feature}(I_v, \mathcal{T}_v) \rightarrow f_{vt}$: Node attribute extraction.
    • $\mathrm{Neighbor}(I_v, r)$: Edge-based neighbor expansion for specified relations.
    • $\mathrm{Degree}(I_v, r)$: Structural node degree queries.

Operators can be recursively composed, allowing both breadth- and depth-first explorations as dictated by evolving reasoning needs: $\mathcal{X} = \left\{ \mathcal{P}_j(G) : \mathcal{P}_j = \mathcal{P}_{j1} \circ \ldots \circ \mathcal{P}_{jk},\ \mathcal{P}_{ji} \in \mathcal{F} \right\}$, where $\mathcal{F}$ is the set of primitive graph extraction functions.
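To make the operator composition concrete, here is a minimal sketch of the four primitives over a toy adjacency-list graph. The data model, matching logic, and example entities are illustrative assumptions, not the paper's implementation; only the operator signatures mirror the definitions above.

```python
# Minimal sketch of AGIEM-style composable extraction operators over a
# toy adjacency-list graph. Data model and matching logic are assumptions.

GRAPH = {
    "nodes": {
        "p1": {"type": "paper", "title": "Graph reasoning with LLMs"},
        "p2": {"type": "paper", "title": "Retrieval-augmented generation"},
        "a1": {"type": "author", "name": "Alice"},
    },
    "edges": [  # (src, relation, dst)
        ("a1", "wrote", "p1"),
        ("a1", "wrote", "p2"),
        ("p1", "cites", "p2"),
    ],
}

def retrieve(text):
    """Retrieve(t) -> I_v: nodes whose attributes mention the query text."""
    return [nid for nid, attrs in GRAPH["nodes"].items()
            if any(text.lower() in str(v).lower() for v in attrs.values())]

def feature(node_ids, attr):
    """Feature(I_v, T_v) -> f_vt: pull one attribute from each node."""
    return {nid: GRAPH["nodes"][nid].get(attr) for nid in node_ids}

def neighbor(node_ids, relation):
    """Neighbor(I_v, r): expand along edges of the given relation."""
    return [dst for src, rel, dst in GRAPH["edges"]
            if src in node_ids and rel == relation]

def degree(node_ids, relation):
    """Degree(I_v, r): count outgoing edges of the given relation."""
    return {nid: sum(1 for src, rel, _ in GRAPH["edges"]
                     if src == nid and rel == relation)
            for nid in node_ids}

# Operators compose: "titles of papers written by the author named Alice".
papers = neighbor(retrieve("Alice"), "wrote")
print(feature(papers, "title"))
```

The composition on the last lines is one element of $\mathcal{X}$: $\mathrm{Feature} \circ \mathrm{Neighbor} \circ \mathrm{Retrieve}$, a depth-two chain chosen by the Execution Agent rather than hard-coded.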

The three agents interact in a sequential, context-updating loop at each reasoning step, with outputs passed as structured context to the next agent and subsequent reasoning rounds.

3. Self-Reflection: Multi-Perspective Error Correction

To mitigate semantic inconsistencies and correct reasoning errors, Graph Counselor incorporates a Self-Reflection with Multiple Perspectives (SR) module. The SR process involves:

  • Reviewing the semantic and structural reasoning trace.
  • Identifying gaps, redundancies, or inconsistencies via guided questions and recapitulations.
  • Analyzing misalignments between extracted graph data and semantic interpretation.
  • Suggesting, through backward reasoning or multi-path exploration, how to refine context, extraction strategy, or path planning.
  • Looping at most twice for efficiency, updating reasoning context before re-invoking the AGIEM.

This reflective intervention is crucial for complex, multi-hop queries or when semantic/pragmatic ambiguities arise in entity references or relationship aggregation.
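A single reflection pass can be sketched as follows: pose guided questions over the reasoning trace, collect a critique, and fold any revisions back into the context before re-running extraction. The question wording and the `critique` heuristic are illustrative placeholders, not the paper's actual prompts.

```python
# Sketch of one self-reflection pass. Guided-question text and the
# critique heuristic are hypothetical stand-ins for LLM calls.

GUIDED_QUESTIONS = [
    "Does every extracted fact support the current subgoal?",
    "Are any retrieved neighbors redundant or off-topic?",
    "Does the entity grounding match the question's intent?",
]

def critique(trace, question):
    # Stand-in for an LLM inspecting the trace; here we flag traces
    # that never expanded beyond one hop.
    if "1-hop" in trace and "2-hop" not in trace:
        return "Consider expanding 2-hop neighbors of the seed entity."
    return None

def self_reflect(trace, context):
    """Return an updated context and whether another round is needed."""
    issues = [c for q in GUIDED_QUESTIONS if (c := critique(trace, q))]
    if issues:
        context = dict(context, revisions=issues)
    return context, bool(issues)

ctx, needs_retry = self_reflect("Retrieve -> 1-hop Neighbor -> answer?", {})
print(needs_retry, ctx["revisions"][0])
```

In the full system this function would be invoked at most twice per query, matching the reflection budget described above.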

4. Algorithmic Workflow

The overall reasoning workflow is structured as follows:

  1. For a user query $q$:
    • Initialize the context.
    • For up to $T$ reasoning steps: Planning $\rightarrow$ Thought $\rightarrow$ Execution, updating context at each step.
    • If an answer is produced, stop; if the step budget is exhausted without one, invoke SR to reflect, critique, and update the context.
    • Re-run AGIEM as necessary, bounded by an outer loop on the number of reflections $N$ (typically 2).
  2. Return the final generated answer, incorporating all extracted knowledge and reasoning chains.

Prompts and role-switching are orchestrated via LLM-internal control and explicit prompt engineering, detailed in the appendix of (Gao et al., 4 Jun 2025).
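The nested control flow can be sketched as two loops: an inner budget of $T$ Planning $\rightarrow$ Thought $\rightarrow$ Execution steps, wrapped in at most $N$ self-reflection rounds. All agent functions below are placeholder stubs standing in for LLM calls; only the loop structure reflects the workflow above.

```python
# Sketch of Graph Counselor's outer control loop. Agent bodies are
# hypothetical stubs; only the T-step / N-reflection nesting is real.

T_STEPS = 5        # inner reasoning budget per round
N_REFLECTIONS = 2  # outer self-reflection budget (the paper's default)

def plan(query, context):
    return {"subgoal": "locate seed entity"}

def think(plan_out, context):
    return {"target": "1-hop neighbors"}

def execute(thought_out, context):
    # Stand-in for operator calls; pretend the answer only surfaces
    # after a reflection round has refined the context.
    return "42" if context.get("reflected") else None

def graph_counselor(query):
    context = {}
    for _ in range(N_REFLECTIONS + 1):
        for _ in range(T_STEPS):
            p = plan(query, context)
            t = think(p, context)
            answer = execute(t, context)
            if answer is not None:
                return answer
        # Step budget exhausted without an answer: reflect and retry.
        context["reflected"] = True
    return None  # reflection budget also spent

print(graph_counselor("multi-hop question"))
```

Bounding both loops keeps worst-case cost at $(N+1) \cdot T$ agent steps, which is what makes the efficiency comparisons in Section 5 meaningful.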

5. Empirical Evaluation and Ablation Analysis

a. Datasets and Benchmarks

  • GRBENCH: 1,740 questions over 10 real-world graphs from Academic, E-commerce, Literature, Healthcare, and Legal domains, stratified by complexity (single-hop, multi-hop, inductive).
  • WebQSP: Standard KGQA for generalization validation.

b. Baseline Comparisons

  • Standalone LLM (no external retrieval)
  • TextRAG (text-only retrieval)
  • GraphRAG (1-hop, 2-hop, KG-based retrieval)
  • Graph-CoT (state-of-the-art chain-of-thought graph reasoning)

c. Reasoning and Retrieval Backbone

  • AGIEM operators leveraging node/attribute search (MPNet-v2 + FAISS), edge traversal, and degree calculation.
  • LLMs evaluated: Mixtral-8x7B-Instruct, Mistral-NeMo-Instruct-2407, Qwen2.5-7B, Qwen2.5-72B, Llama-3.1-70B, Gemma-2-9B.
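The node-search backbone pairs a sentence encoder with a vector index. The NumPy-only sketch below shows the retrieval pattern with a toy bag-of-words embedding and a brute-force dot product standing in for the MPNet encoder and FAISS index; the node texts are invented.

```python
# NumPy-only stand-in for embed-and-search node retrieval: embed node
# texts, then rank nodes by cosine similarity to the query. A toy
# bag-of-words embedding replaces MPNet; brute force replaces FAISS.
import numpy as np

NODE_TEXTS = {
    "p1": "graph neural networks for molecules",
    "p2": "retrieval augmented generation survey",
    "a1": "researcher in information retrieval",
}

VOCAB = sorted({w for t in NODE_TEXTS.values() for w in t.split()})

def embed(text):
    """Toy bag-of-words embedding, L2-normalised so dot = cosine."""
    vec = np.array([text.split().count(w) for w in VOCAB], dtype=float)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

NODE_IDS = list(NODE_TEXTS)
INDEX = np.stack([embed(NODE_TEXTS[n]) for n in NODE_IDS])  # (nodes, dim)

def retrieve(query, k=1):
    """Return the k node ids most similar to the query text."""
    scores = INDEX @ embed(query)
    top = np.argsort(-scores)[:k]
    return [NODE_IDS[i] for i in top]

print(retrieve("retrieval augmented generation"))
```

Swapping the toy pieces for a real encoder and a FAISS inner-product index changes only `embed` and the `INDEX @ …` line; the operator's interface to the Execution Agent stays the same.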

d. Results

| Model | RL (GRBENCH Overall) | RL Improvement vs. SOTA | QS | LS |
|---|---|---|---|---|
| Graph-CoT | Baseline | — | Baseline | Baseline |
| Graph Counselor | up to +24.2% | +24.2% | +8–12% | +8–12% |

Graph Counselor outperforms all baselines on RL (Rouge-L), QwenScore (QS), and LlamaScore (LS), especially on medium and hard graph QA tasks.

Ablation findings:

  • Planning Agent: Removing it resulted in up to 6.1% drop in medium/hard question accuracy.
  • Execution Agent with restricted compositionality: Up to 3.6% drop.
  • SR Module removal: Up to 7.26% degradation; most important for challenging, ambiguous, or multi-hop scenarios.
  • Number of Reflections: Two iterations optimized efficiency and performance.

A notable efficiency result: Graph Counselor with Gemma-2-9B outperformed Graph-CoT with Llama-3.1-70B at just ~14% of the computation cost on E-commerce QA.

6. Qualitative and Practical Significance

The multi-agent and reflectively adaptive architecture confers several important advantages:

  • Hierarchical Information Extraction: AGIEM enables fine-grained, multi-hop, and recursively structured retrieval and aggregation, overcoming the brittleness and shallowness of fixed-extraction RAG.
  • Dynamic Reasoning Depth: Planning and Thought agents adapt how deeply (in hops) the LLM explores, tailored to the semantic and structural demands of each question.
  • Multi-Perspective Reflection: Backward analysis, alternative-path probing, and correction triggers enable diagnosis of, and recovery from, errors that are typically unrecoverable in static systems.
  • Generalization and Efficiency: The approach is robust across domains, with strong performance even from smaller backbone LLMs due to architectural efficiency.
  • Reproducibility: Code and resources are open-sourced at https://github.com/gjq100/Graph-Counselor.

7. Summary Table: Comparative Features

| Aspect | Generic GraphRAG | Graph Counselor |
|---|---|---|
| Agent Structure | Single-agent | Multi-agent (Planning/Thought/Execution) |
| Information Extraction | Fixed, shallow | Hierarchical, compositional, adaptive |
| Reasoning Adaptability | Rigid | Dynamic, self-reflective, context-updating |
| Reflection/Error Correction | Absent/minimal | Iterative, multi-perspective SR mechanism |
| Multi-hop/Long Reasoning | Limited | Adaptive, efficient even for deep queries |
| Open Source | Varies | Yes |

8. Conclusion

Graph Counselor defines the state of the art in LLM-augmented graph exploration and QA by combining adaptive, multi-agent planning and compositional information extraction with a robust, iterative self-reflection framework. This results in substantial gains in both reasoning accuracy and efficiency, especially for hard graph QA queries, and provides a generalizable blueprint for next-generation GraphRAG and graph-assisted intelligent assistants (Gao et al., 4 Jun 2025).
