On Exploring the Reasoning Capability of Large Language Models with Knowledge Graphs (2312.00353v1)

Published 1 Dec 2023 in cs.CL and cs.AI

Abstract: This paper examines the capacity of LLMs to reason with knowledge graphs using their internal knowledge graph, i.e., the knowledge graph they learned during pre-training. Two research questions are formulated to investigate the accuracy of LLMs in recalling information from pre-training knowledge graphs and their ability to infer knowledge graph relations from context. To address these questions, we employ LLMs to perform four distinct knowledge graph reasoning tasks. Furthermore, we identify two types of hallucinations that may occur during knowledge reasoning with LLMs: content and ontology hallucination. Our experimental results demonstrate that LLMs can successfully tackle both simple and complex knowledge graph reasoning tasks from their own memory, as well as infer from input context.
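The two hallucination types named in the abstract can be illustrated with a toy triple-checking sketch. This is not the paper's code: the sample knowledge graph, the relation set, and the exact operational definitions (ontology hallucination = relation outside the schema; content hallucination = valid relation but wrong fact) are assumptions made for illustration.

```python
# Illustrative sketch only: classify a model-predicted KG triple against
# a reference knowledge graph and its ontology. The KG contents and the
# hallucination definitions below are assumptions, not the paper's setup.

KG = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
}
ONTOLOGY_RELATIONS = {"capital_of", "located_in"}  # assumed schema

def classify_triple(head: str, relation: str, tail: str) -> str:
    """Label a predicted triple as correct or as one hallucination type."""
    if relation not in ONTOLOGY_RELATIONS:
        # The relation itself is not part of the KG schema.
        return "ontology hallucination"
    if (head, relation, tail) in KG:
        return "correct"
    # Relation is valid, but the stated fact is not in the KG.
    return "content hallucination"

print(classify_triple("Paris", "capital_of", "France"))    # correct
print(classify_triple("Paris", "capital_of", "Germany"))   # content hallucination
print(classify_triple("Paris", "twinned_with", "Rome"))    # ontology hallucination
```

In this framing, an ontology check is independent of factual recall: a model can name a schema-valid relation and still hallucinate the fact, which is why the two error types are counted separately.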

Authors (4)
  1. Pei-Chi Lo (3 papers)
  2. Yi-Hang Tsai (3 papers)
  3. Ee-Peng Lim (57 papers)
  4. San-Yih Hwang (3 papers)
Citations (1)