
Adaptive Graph of Thoughts: Test-Time Adaptive Reasoning Unifying Chain, Tree, and Graph Structures (2502.05078v1)

Published 7 Feb 2025 in cs.AI and cs.CL

Abstract: LLMs have demonstrated impressive reasoning capabilities, yet their performance is highly dependent on the prompting strategy and model scale. While reinforcement learning and fine-tuning have been deployed to boost reasoning, these approaches incur substantial computational and data overhead. In this work, we introduce Adaptive Graph of Thoughts (AGoT), a dynamic, graph-based inference framework that enhances LLM reasoning solely at test time. Rather than relying on fixed-step methods like Chain of Thought (CoT) or Tree of Thoughts (ToT), AGoT recursively decomposes complex queries into structured subproblems, forming a dynamic directed acyclic graph (DAG) of interdependent reasoning steps. By selectively expanding only those subproblems that require further analysis, AGoT unifies the strengths of chain, tree, and graph paradigms into a cohesive framework that allocates computation where it is most needed. We validate our approach on diverse benchmarks spanning multi-hop retrieval, scientific reasoning, and mathematical problem-solving, achieving up to 46.2% improvement on scientific reasoning tasks (GPQA) - comparable to gains achieved through computationally intensive reinforcement learning approaches and outperforming state-of-the-art iterative approaches. These results suggest that dynamic decomposition and structured recursion offer a scalable, cost-effective alternative to post-training modifications, paving the way for more robust, general-purpose reasoning in LLMs.

The paper "Adaptive Graph of Thoughts: Test-Time Adaptive Reasoning Unifying Chain, Tree, and Graph Structures" introduces the Adaptive Graph of Thoughts (AGoT), a novel dynamic, graph-based inference framework for enhancing the reasoning capabilities of LLMs solely at test time. The framework addresses limitations of traditional methods such as Chain of Thought (CoT) and Tree of Thoughts (ToT) by using a directed acyclic graph (DAG) to recursively decompose complex queries into structured subproblems. Only subproblems that require further exploration are expanded, effectively unifying the strengths of chain, tree, and graph paradigms and allocating computation where it is most needed.

Key Contributions and Results:

  • Framework Design: AGoT deviates from fixed-step methods by dynamically constructing a directed acyclic graph for organizing interdependent reasoning steps. This design enables a more adaptable and generalizable inference strategy compared to CoT and ToT.
  • Performance Improvements: The framework achieves notable performance gains across several benchmarks, particularly in scientific reasoning tasks such as GPQA, where it offers up to a 46.2% improvement. Such improvements are comparable to those gained through more computationally intensive reinforcement learning approaches, yet AGoT avoids the additional training overhead.
  • Task Versatility: AGoT's design allows it to effectively handle different categories of tasks, including multi-hop retrieval, scientific reasoning, and mathematical problem-solving. The paper reports consistent enhancements in performance across reasoning, retrieval, and explorative task categories when using AGoT with gpt-4o-mini, demonstrating up to an 86.6% improvement in letter accuracy on crossword tasks and a 400% improvement in solving the Game of 24, compared to direct input-output processing.
  • Edge and Node Strategies: The framework is designed to flexibly manage the generation of new nodes per layer and recursive application, leveraging complexity checks to dynamically guide the reasoning process.
  • Scalability: AGoT serves as a scalable and cost-effective alternative to traditional post-training modifications, showing that enhancing inference at the level of graph structuring can match, if not exceed, the benefits of computationally heavy retraining methods.
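The unification of chain, tree, and graph paradigms can be illustrated with a toy DAG in which one reasoning step depends on several earlier steps, something a chain (a single path) or a tree (one parent per node) cannot express. The graph and helper below are purely illustrative, not taken from the paper:

```python
# A minimal DAG of reasoning steps: chains and trees are special cases
# (every node has at most one parent); a DAG additionally allows a step
# to aggregate several earlier conclusions.
graph = {
    "q": [],                # root query
    "s1": ["q"],            # two independent subproblems (tree-like split)
    "s2": ["q"],
    "merge": ["s1", "s2"],  # graph-only: a step with two parents
}

def topo_order(dag):
    """Order steps so every step comes after all of its dependencies."""
    seen, order = set(), []
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for parent in dag[node]:
            visit(parent)   # resolve dependencies first
        order.append(node)
    for node in dag:
        visit(node)
    return order
```

Evaluating nodes in topological order is what makes interdependent reasoning steps well-defined: by the time "merge" is reached, both of its parent conclusions are available.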

Technical Implementation:

  • AGoT operates through a recursive function defined over thought decomposition and evaluation, as formalized in the paper's algorithm. The framework is agnostic to the underlying LLM architecture and is therefore compatible with various models, such as gpt-4o-mini.
  • The experimental setup reflects a diverse collection of reasoning and retrieval tasks, including challenging datasets such as MoreHopQA and HybridQA, where AGoT demonstrates superior results in logic accuracy metrics like LAAS, and significant enhancements in exploration-intensive tasks such as mini-crosswords and the Game of 24.

The paper ultimately positions AGoT as a forward-looking framework that aligns well with the increased demand for reasoning-enhanced AI solutions. It advocates for the decomposition of cognitive tasks within a graph-oriented data structure as a promising strategy to achieve high-level LLM interactions and improve performance across a wide spectrum of difficult problem-solving scenarios.

Authors (4)
  1. Tushar Pandey
  2. Ara Ghukasyan
  3. Oktay Goktas
  4. Santosh Kumar Radha