
Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs (2407.21358v1)

Published 31 Jul 2024 in cs.AI

Abstract: Knowledge graphs (KGs) complement LLMs by providing reliable, structured, domain-specific, and up-to-date external knowledge. However, KGs and LLMs are often developed separately and must be integrated after training. We introduce Tree-of-Traversals, a novel zero-shot reasoning algorithm that enables augmentation of black-box LLMs with one or more KGs. The algorithm equips an LLM with actions for interfacing with a KG and enables the LLM to perform tree search over possible thoughts and actions to find high-confidence reasoning paths. We evaluate on two popular benchmark datasets. Our results show that Tree-of-Traversals significantly improves performance on question answering and KG question answering tasks. Code is available at https://github.com/amazon-science/tree-of-traversals.
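To make the abstract's description concrete, below is a minimal illustrative Python sketch, not the authors' implementation, of a Tree-of-Traversals-style loop: a black-box LLM proposes candidate actions (think, expand the local KG view, or answer), the same LLM scores each resulting state, and a best-first search explores the action tree until a high-confidence answer is found. The functions call_llm and query_kg, the action grammar, and the scoring prompt are hypothetical placeholders; the authors' full interface lives in the repository linked above.

```python
# Minimal sketch of an LLM-guided best-first search over KG actions.
# `call_llm` and `query_kg` are hypothetical stubs to be replaced with a
# real LLM client and a real KG endpoint (e.g., a Wikidata SPARQL wrapper).

import heapq
import itertools
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Hypothetical black-box LLM call; swap in any chat-completion client."""
    raise NotImplementedError


def query_kg(entity: str, relation: str) -> list[str]:
    """Hypothetical KG lookup returning neighboring entities."""
    raise NotImplementedError


@dataclass(order=True)
class State:
    neg_score: float                      # heapq is a min-heap, so store -score
    tiebreak: int                         # insertion order breaks score ties
    trace: list = field(compare=False)    # thoughts, actions, and KG results so far
    answered: bool = field(default=False, compare=False)


def tree_of_traversals(question: str, budget: int = 20, branching: int = 3,
                       threshold: float = 0.8) -> str | None:
    counter = itertools.count()
    frontier = [State(0.0, next(counter), trace=[])]
    while frontier and budget > 0:
        state = heapq.heappop(frontier)            # expand the highest-scored state
        if state.answered and -state.neg_score >= threshold:
            return state.trace[-1]                 # high-confidence answer found
        budget -= 1
        # Ask the LLM for candidate next actions given the trace so far.
        proposals = call_llm(
            f"Question: {question}\nTrace: {state.trace}\n"
            f"Propose {branching} next actions "
            f"(THINK ... / EXPAND_KG <entity> <relation> / ANSWER ...)."
        ).splitlines()[:branching]
        for action in proposals:
            new_trace = state.trace + [action]
            if action.startswith("EXPAND_KG"):
                # Pull the requested neighborhood into the local KG view.
                _, entity, relation = action.split(maxsplit=2)
                new_trace.append(f"KG: {query_kg(entity, relation)}")
            # The LLM also serves as the value function, scoring the state in [0, 1].
            score = float(call_llm(f"Rate this partial reasoning from 0 to 1: {new_trace}"))
            heapq.heappush(frontier, State(-score, next(counter), new_trace,
                                           answered=action.startswith("ANSWER")))
    return None  # search budget exhausted without a confident answer
```

The branching factor, confidence threshold, and single-step EXPAND_KG action here are simplifying assumptions for illustration; the paper's algorithm defines a richer action space and search procedure, documented in the linked repository.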
