
Graph of Thoughts: Solving Elaborate Problems with Large Language Models

(2308.09687)
Published Aug 18, 2023 in cs.CL, cs.AI, and cs.LG

Abstract

We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in LLMs beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over the state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings LLM reasoning closer to human thinking and brain mechanisms such as recurrence, both of which form complex networks.

Figure: Comparison between Graph of Thoughts and other prompting strategies.

Overview

  • Graph of Thoughts (GoT) is introduced as a new paradigm for improving prompt engineering in LLMs, transforming reasoning into a graph structure.

  • The GoT framework includes thought transformations such as aggregation, refinement, and generation, and introduces thought scoring and ranking to select the most promising solutions.

  • The graph-based reasoning approach aligns more closely with human cognitive processes and outperforms chain- and tree-based prompting on tasks such as sorting.

  • GoT's introduction signals a move towards more sophisticated prompting techniques, with potential implications for research in artificial intelligence and graph theory.

Advancing Prompt Engineering with Graph of Thoughts (GoT)

Introduction

Prompt engineering has recently emerged as a critical approach for leveraging the power of LLMs efficiently, without necessitating any modifications to the model itself. This technique relies on crafting input prompts in a manner that effectively communicates the task to the model, enabling it to generate useful outputs. Despite its potential, the process of designing effective prompts remains a significant challenge. Addressing this issue, we introduce Graph of Thoughts (GoT), a new paradigm designed to enhance an LLM's problem-solving capability through a graph-based representation of reasoning processes.

The GoT Framework

At its core, GoT transforms the LLM reasoning process into a graph structure, where vertices represent individual thoughts or intermediate steps towards solving a problem, and edges represent dependencies or logical flows between these thoughts. This representation allows for complex thought interactions beyond linear or tree-based reasoning patterns, offering a more nuanced and flexible approach to problem-solving.
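
To make this representation concrete, the sketch below models a thought graph as a small Python class with an adjacency list. It is a minimal illustration, not the paper's reference implementation; the names `Thought`, `GraphOfThoughts`, and `add_thought` are placeholders chosen for this example.

```python
from dataclasses import dataclass


@dataclass
class Thought:
    """A single unit of LLM-generated information: one vertex in the graph."""
    id: int
    content: str
    score: float = 0.0  # filled in later by a scoring step


class GraphOfThoughts:
    """Directed graph of thoughts; an edge u -> v means v was derived from u."""

    def __init__(self) -> None:
        self.thoughts: dict[int, Thought] = {}
        self.edges: dict[int, list[int]] = {}  # thought id -> ids of derived thoughts
        self._next_id = 0

    def add_thought(self, content: str, parents: list[int] | None = None) -> Thought:
        """Add a new thought, recording dependency edges from each parent."""
        t = Thought(self._next_id, content)
        self._next_id += 1
        self.thoughts[t.id] = t
        self.edges[t.id] = []
        for p in parents or []:
            self.edges[p].append(t.id)  # dependency edge: parent -> new thought
        return t
```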

Key innovations include operations for thought transformations such as aggregation, refinement, and generation, each tailored to leverage the graph structure for enhanced problem-solving. Aggregation, for instance, combines multiple thoughts to synthesize a unified outcome, aiming to distill synergies from varied reasoning paths. The framework also introduces mechanisms for thought scoring and ranking, enabling the selection of the most promising solutions from a pool of generated thoughts.
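
The three transformations and the scoring step could be expressed, for instance, as plain functions over the `GraphOfThoughts` sketch above. The prompt strings and the `llm` callable below are hypothetical stand-ins for whatever prompt templates and model API a concrete implementation would use.

```python
from typing import Callable


def generate(graph: GraphOfThoughts, parent: Thought,
             llm: Callable[[str], str], k: int = 3) -> list[Thought]:
    """Generation: branch k new thoughts out of a single parent thought."""
    return [graph.add_thought(llm(f"Continue solving, starting from:\n{parent.content}"),
                              parents=[parent.id]) for _ in range(k)]


def aggregate(graph: GraphOfThoughts, parents: list[Thought],
              llm: Callable[[str], str]) -> Thought:
    """Aggregation: merge several thoughts into one synergistic outcome.
    A merged thought has multiple parents, which is why a general graph
    (rather than a tree) is needed to express this transformation."""
    joined = "\n---\n".join(p.content for p in parents)
    return graph.add_thought(llm(f"Combine these partial solutions into one:\n{joined}"),
                             parents=[p.id for p in parents])


def refine(graph: GraphOfThoughts, thought: Thought,
           llm: Callable[[str], str]) -> Thought:
    """Refinement: improve a thought via a feedback step on its own content."""
    return graph.add_thought(llm(f"Improve this solution:\n{thought.content}"),
                             parents=[thought.id])


def keep_best(thoughts: list[Thought],
              scorer: Callable[[Thought], float], n: int = 1) -> list[Thought]:
    """Scoring and ranking: keep the n most promising thoughts."""
    for t in thoughts:
        t.score = scorer(t)
    return sorted(thoughts, key=lambda t: t.score, reverse=True)[:n]
```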

Practical Implications and Theoretical Insights

GoT's graph-based approach not only increases the efficacy of prompt engineering but also aligns more closely with human reasoning and the complex, networked structure of the brain. Our evaluations demonstrate GoT's advantage on several tasks, including sorting and set operations, where it outperforms existing methods such as ToT in both output quality and cost.
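
As an illustration of where aggregation pays off, the paper's sorting task splits the input into chunks, sorts each chunk with the LLM, and then merges the partial results in an aggregation step. The sketch below wires that pattern out of the primitives from the previous sketches; `llm` and the chunk size are hypothetical parameters, not the paper's exact configuration.

```python
def got_sort(numbers: list[int], llm: Callable[[str], str], chunk: int = 16) -> Thought:
    """Split-sort-merge pattern for sorting with a thought graph."""
    graph = GraphOfThoughts()
    root = graph.add_thought(str(numbers))
    # Generation: split the problem into chunks and sort each one independently.
    parts = [numbers[i:i + chunk] for i in range(0, len(numbers), chunk)]
    sorted_parts = [graph.add_thought(llm(f"Sort this list: {p}"), parents=[root.id])
                    for p in parts]
    # Aggregation: merge the partial results into a single sorted list.
    return aggregate(graph, sorted_parts, llm)
```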

Furthermore, we introduce a novel metric for evaluating prompting strategies: the volume of a thought, defined as the number of preceding LLM thoughts that could have contributed to it. This offers a new lens through which to assess prompting strategies: GoT is distinguished by reaching final thoughts with low latency while maintaining a high volume of contributing thoughts, a combination not matched by existing approaches.
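
Under this definition, the volume of a thought t can be computed by walking the dependency edges backwards from t and counting the distinct ancestors reached. The sketch below is one way to do this over the `GraphOfThoughts` structure from the earlier sketch; it is an interpretation of the metric for illustration, not the paper's evaluation code.

```python
from collections import deque


def volume(graph: GraphOfThoughts, t: Thought) -> int:
    """Number of preceding thoughts with a directed path to t."""
    # Build a reverse adjacency list (child -> parents) from the forward edges.
    parents: dict[int, list[int]] = {i: [] for i in graph.thoughts}
    for src, dsts in graph.edges.items():
        for dst in dsts:
            parents[dst].append(src)
    # Breadth-first search backwards from t, counting distinct ancestors.
    seen: set[int] = set()
    queue = deque(parents[t.id])
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(parents[node])
    return len(seen)
```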

Future Directions

The introduction of GoT paves the way for more sophisticated prompting techniques that more closely mimic complex human thought processes. Its success in leveraging graph structures invites further exploration into other areas where graph theory can intersect with artificial intelligence, potentially leading to breakthroughs in how we understand and enhance machine reasoning.

Conclusion

Graph of Thoughts represents a significant advancement in prompt engineering for LLMs, offering a novel graph-based framework that encapsulates complex reasoning in a manner akin to human thought processes. Through its flexibility, efficiency, and alignment with cognitive structures, GoT sets a new standard for solving elaborate problems and opens new avenues for research at the intersection of artificial intelligence and graph theory.

References
  1. Practice of Streaming Processing of Dynamic Graphs: Concepts, Models, and Systems. IEEE Transactions on Parallel and Distributed Systems, 34(6): 1860–1876.
  2. GDI: A Graph Database Interface Standard. https://github.com/spcl/GDI-RMA. Accessed: 2023-09-05.

  3. The Graph Database Interface: Scaling Online Transactional and Analytical Graph Workloads to Hundreds of Thousands of Cores. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC ’23. ACM.
  4. Demystifying Graph Databases: Analysis and Taxonomy of Data Organization, System Designs, and Graph Queries. ACM Comput. Surv., 56(2).
  5. Motif Prediction with Graph Neural Networks. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD ’22, 35–45.
  6. Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis
  7. Neural Graph Databases. In Proceedings of the First Learning on Graphs Conference, volume 198 of Proceedings of Machine Learning Research, 31:1–31:38. PMLR.
  8. SISA: Set-Centric Instruction Set Architecture for Graph Mining on Processing-in-Memory Systems. In Proceedings of the 54th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO ’21, 282–297.
  9. Communication-Efficient Jaccard Similarity for High-Performance Distributed Genome Comparisons. In Proceedings of the IEEE International Parallel and Distributed Processing Symposium, IPDPS ’20, 1122–1132.
  10. ProbGraph: High-Performance and High-Accuracy Graph Mining with Probabilistic Set Representations. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC ’22. IEEE.
  11. GraphMineSuite: Enabling High-Performance and Programmable Graph Mining Algorithms with Set Algebra. Proc. VLDB Endow., 14(11): 1922–1935.
  12. Geometric Deep Learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4): 18–42.
  13. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems (NeurIPS ’20), volume 33, 1877–1901. Curran Associates.
  14. Sparks of Artificial General Intelligence: Early experiments with GPT-4
  15. Graph Mining: Laws, Generators, and Algorithms. ACM Comput. Surv., 38(1).
  16. Machine Learning on Graphs: A Model and Comprehensive Taxonomy
  17. Teaching Large Language Models to Self-Debug
  18. Fast Graph Pattern Matching. In Proceedings of the IEEE 24th International Conference on Data Engineering, ICDE ’08, 913–922.
  19. PaLM: Scaling Language Modeling with Pathways
  20. Mining Graph Data. John Wiley & Sons.
  21. Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning
  22. Low-Latency Graph Streaming Using Compressed Purely-Functional Trees. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’19, 918–934.
  23. Language Model Cascades. In Beyond Bayes: Paths Towards Universal Reasoning Systems, Workshop at ICML ’22.
  24. A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level. Proceedings of the National Academy of Sciences, 119(32): e2123433119.
  25. Graph Pattern Matching: From Intractable to Polynomial Time. Proc. VLDB Endow., 3(1–2): 264–275.
  26. DISTINGER: A distributed graph data structure for massive dynamic graph processing. In Proceedings of the IEEE International Conference on Big Data, Big Data ’15, 1814–1822.
  27. Triangles to Capture Social Cohesion. In Proceedings of the IEEE Third International Conference on Privacy, Security, Risk and Trust and IEEE Third International Conference on Social Computing, PASSAT/SocialCom ’11, 258–265.
  28. Friston, K. 2008. Hierarchical Models in the Brain. PLOS Computational Biology, 4(11): 1–24.
  29. Complexity-Based Prompting for Multi-Step Reasoning
  30. Learning Combinatorial Node Labeling Algorithms
  31. Lifting Sequential Graph Algorithms for Distributed-Memory Parallel Computation. SIGPLAN Not., 40(10): 423–437.
  32. The Parallel BGL: A generic library for distributed graph computations. Parallel Object-Oriented Scientific Computing (POOSC).
  33. Representation Learning on Graphs: Methods and Applications. Bulletin of the Technical Committee on Data Engineering, 40(3): 52–74.
  34. A survey on improving NLP models with human explanations. In Proceedings of the First Workshop on Learning with Natural Language Supervision, 40–47. Association for Computational Linguistics.
  35. Cyclic Pattern Kernels for Predictive Graph Mining. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’04, 158–167.
  36. Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, 9118–9147. PMLR.
  37. Inner Monologue: Embodied Reasoning through Planning with Language Models
  38. A survey of frequent subgraph mining algorithms. The Knowledge Engineering Review, 28(1): 75–105.
  39. Language Models can Solve Computer Tasks
  40. Explanation-Based Human Debugging of NLP Models: A Survey. Transactions of the Association for Computational Linguistics, 9: 1508–1528.
  41. The Power of Scale for Parameter-Efficient Prompt Tuning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’21, 3045–3059. Association for Computational Linguistics.
  42. Prefix-Tuning: Optimizing Continuous Prompts for Generation
  43. Large Language Model Guided Tree-of-Thought
  44. Challenges in Parallel Graph Processing. Parallel Processing Letters, 17(1): 5–20.
  45. Self-Refine: Iterative Refinement with Self-Feedback
  46. Pregel: A System for Large-Scale Graph Processing. In Proceedings of the International Conference on Management of Data, SIGMOD ’10, 135–146. ACM.
  47. Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation
  48. Show Your Work: Scratchpads for Intermediate Computation with Language Models
  49. REFINER: Reasoning Feedback on Intermediate Representations
  50. Shaping Communities out of Triangles. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM ’12, 1677–1681.
  51. Reasoning with Language Model Prompting: A Survey. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, ACL ’23, 5368–5393. Association for Computational Linguistics.
  52. qrdlgit. 2023. graph-of-thoughts Repository. https://github.com/qrdlgit/graph-of-thoughts. Accessed: 2023-10-11.

  53. Improving Language Understanding by Generative Pre-Training. https://openai.com/research/language-unsupervised. Accessed: 2023-09-06.

  54. Language Models are Unsupervised Multitask Learners. https://openai.com/research/better-language-models. Accessed: 2023-09-06.

  55. Graph Databases: New Opportunities for Connected Data. O’Reilly Media, 2nd edition.
  56. The Future is Big Graphs: A Community View on Graph Processing Systems. Commun. ACM, 64(9): 62–71.
  57. The Graph Neural Network Model. IEEE Transactions on Neural Networks, 20(1): 61–80.
  58. Schaeffer, S. E. 2007. Graph clustering. Computer Science Review, 1(1): 27–64.
  59. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
  60. Reflexion: Language Agents with Verbal Reinforcement Learning
  61. Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data
  62. Arabesque: A System for Distributed Graph Mining. In Proceedings of the 25th Symposium on Operating Systems Principles, SOSP ’15, 425–440. ACM.
  63. LLaMA: Open and Efficient Foundation Language Models
  64. Llama 2: Open Foundation and Fine-Tuned Chat Models
  65. Attention is All you Need. In Advances in Neural Information Processing Systems (NIPS ’17), volume 30. Curran Associates.
  66. Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, ACL ’23, 2609–2634. Association for Computational Linguistics.
  67. Self-Consistency Improves Chain of Thought Reasoning in Language Models. In Proceedings of the Eleventh International Conference on Learning Representations, ICLR ’23.
  68. Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents
  69. Interactive Natural Language Processing
  70. Putting Humans in the Natural Language Processing Loop: A Survey. In Proceedings of the First Workshop on Bridging Human-Computer Interaction and Natural Language Processing, 47–52. Association for Computational Linguistics.
  71. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
  72. PromptChainer: Chaining Large Language Model Prompts through Visual Programming. In Extended Abstracts of the Conference on Human Factors in Computing Systems, CHI EA ’22. ACM.
  73. AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. In Proceedings of the Conference on Human Factors in Computing Systems, CHI ’22. ACM.
  74. A Comprehensive Survey on Graph Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1): 4–24.
  75. Self-Evaluation Guided Beam Search for Reasoning
  76. Foundation Models for Decision Making: Problems, Methods, and Opportunities
  77. Tree of Thoughts: Deliberate Problem Solving with Large Language Models
  78. ReAct: Synergizing Reasoning and Acting in Language Models. In Proceedings of the Eleventh International Conference on Learning Representations, ICLR ’23.
  79. Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Language Models
  80. STaR: Bootstrapping Reasoning With Reasoning. In Advances in Neural Information Processing Systems (NeurIPS ’22), volume 35, 15476–15488. Curran Associates.
  81. Planning with Large Language Models for Code Generation. In Proceedings of the Eleventh International Conference on Learning Representations, ICLR ’23.
  82. Deep Learning on Graphs: A Survey. IEEE Transactions on Knowledge and Data Engineering, 34(1): 249–270.
  83. Graph neural networks: A review of methods and applications. AI Open, 1: 57–81.
  84. Large Language Models Are Human-Level Prompt Engineers
