
Structure Guided Prompt: Instructing Large Language Model in Multi-Step Reasoning by Exploring Graph Structure of the Text (2402.13415v1)

Published 20 Feb 2024 in cs.CL

Abstract: Although LLMs excel at straightforward reasoning tasks, they frequently struggle when confronted with more complex multi-step reasoning, due to a range of factors. Firstly, natural language often encompasses complex relationships among entities, making it challenging to maintain a clear reasoning chain over longer spans. Secondly, linguistic diversity means that the same entities and relationships can be expressed using different terminology and structures, complicating the task of identifying and connecting multiple pieces of information. Graphs provide an effective way to represent data rich in relational information and to capture long-term dependencies among entities. To harness the potential of graphs, our paper introduces Structure Guided Prompt, a three-stage, task-agnostic prompting framework designed to improve the multi-step reasoning capabilities of LLMs in a zero-shot setting. The framework explicitly converts unstructured text into a graph via LLMs and instructs them to navigate this graph using task-specific strategies to formulate responses. By effectively organizing information and guiding navigation, it enables LLMs to provide more accurate and context-aware responses. Our experiments show that the framework significantly enhances the reasoning capabilities of LLMs, enabling them to excel across a broader spectrum of natural language scenarios.
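To make the three stages concrete, here is a minimal Python sketch of the pipeline as the abstract describes it: graph construction, guided navigation, and answer formulation. The prompt wording and all names (`call_llm`, `extract_graph`, `navigate_graph`, `formulate_answer`) are illustrative assumptions for this sketch, not the paper's actual prompts or code.

```python
# Minimal sketch of the three-stage Structure Guided Prompt pipeline, as
# described in the abstract. Prompt wording and function names are
# illustrative assumptions, not the authors' exact prompts.


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call; wire to your LLM client."""
    raise NotImplementedError


def extract_graph(text: str) -> str:
    # Stage 1: explicitly convert unstructured text into a graph of
    # (subject, relation, object) triples via the LLM.
    return call_llm(
        "Extract every entity and relationship in the passage below as a "
        "list of (subject, relation, object) triples.\n\nPassage:\n" + text
    )


def navigate_graph(triples: str, question: str) -> str:
    # Stage 2: instruct the LLM to traverse the graph with a task-specific
    # strategy (multi-hop path finding is shown here as one example).
    return call_llm(
        "Using only the triples below, trace step by step the chain of "
        "relationships connecting the entities mentioned in the question.\n\n"
        f"Triples:\n{triples}\n\nQuestion: {question}"
    )


def formulate_answer(path: str, question: str) -> str:
    # Stage 3: formulate the final response from the traced reasoning path.
    return call_llm(
        f"Reasoning path:\n{path}\n\nBased only on this path, answer the "
        f"question: {question}"
    )


def structure_guided_prompt(text: str, question: str) -> str:
    triples = extract_graph(text)
    path = navigate_graph(triples, question)
    return formulate_answer(path, question)
```

Keeping the stages as separate calls mirrors the framework's central idea: the intermediate graph and the traversal path are made explicit, rather than left implicit inside a single chain-of-thought prompt.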

Authors (4)
  1. Kewei Cheng (8 papers)
  2. Nesreen K. Ahmed (76 papers)
  3. Theodore Willke (6 papers)
  4. Yizhou Sun (149 papers)