
Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-based Retrofitting (2311.13314v1)

Published 22 Nov 2023 in cs.CL

Abstract: Incorporating factual knowledge from knowledge graphs is regarded as a promising approach for mitigating hallucination in LLMs. Existing methods usually use only the user's input to query the knowledge graph, and thus fail to address factual hallucinations generated by LLMs during their reasoning process. To address this problem, this paper proposes Knowledge Graph-based Retrofitting (KGR), a new framework that combines LLMs with KGs to mitigate factual hallucination during the reasoning process by retrofitting the initial draft responses of LLMs based on the factual knowledge stored in KGs. Specifically, KGR leverages LLMs to extract, select, validate, and retrofit factual statements within the model-generated responses, which enables an autonomous knowledge verification and refinement procedure without any additional manual effort. Experiments show that KGR can significantly improve the performance of LLMs on factual QA benchmarks, especially those involving complex reasoning processes, which demonstrates the necessity and effectiveness of KGR in mitigating hallucination and enhancing the reliability of LLMs.
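The abstract's extract-select-validate-retrofit loop can be illustrated with a minimal sketch. Note the assumptions: the paper performs each stage with an LLM over free-text responses, whereas here the knowledge graph is a toy triple store, the draft is pre-structured into triples, and all helper names (`extract_claims`, `verify_claim`, `retrofit`) are hypothetical, not the authors' API.

```python
# Toy knowledge graph: (subject, relation) -> object.
# In KGR this would be a real KG queried via LLM-generated lookups.
KG = {
    ("Eiffel Tower", "located_in"): "Paris",
    ("Eiffel Tower", "completed_in"): "1889",
}

def extract_claims(draft):
    """Stand-in for the LLM extraction/selection steps: pull the
    factual triples out of a draft response. Here the draft is
    already structured, so this is a simple field access."""
    return draft["claims"]

def verify_claim(claim):
    """Validate one claim against the KG. Returns the corrected
    triple if the KG contradicts the claim, else None (claim is
    either supported or not covered by the KG)."""
    s, r, o = claim
    truth = KG.get((s, r))
    if truth is not None and truth != o:
        return (s, r, truth)
    return None

def retrofit(draft):
    """Retrofit the draft: replace every KG-contradicted claim with
    the KG's fact, keeping supported/unknown claims untouched."""
    corrected = []
    for claim in extract_claims(draft):
        fix = verify_claim(claim)
        corrected.append(fix if fix is not None else claim)
    return {"claims": corrected}

# A draft with one hallucinated fact (wrong location) and one correct one.
draft = {"claims": [("Eiffel Tower", "located_in", "London"),
                    ("Eiffel Tower", "completed_in", "1889")]}
result = retrofit(draft)
```

In the paper the whole loop runs autonomously, with the LLM itself producing the extraction, KG queries, and rewritten response; this sketch only shows the data flow between the stages.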

Authors (7)
  1. Xinyan Guan (10 papers)
  2. Yanjiang Liu (8 papers)
  3. Hongyu Lin (94 papers)
  4. Yaojie Lu (61 papers)
  5. Ben He (37 papers)
  6. Xianpei Han (103 papers)
  7. Le Sun (111 papers)
Citations (39)