Minimizing Factual Inconsistency and Hallucination in Large Language Models (2311.13878v1)

Published 23 Nov 2023 in cs.CL and cs.AI

Abstract: LLMs are widely used in critical fields such as healthcare, education, and finance due to their remarkable proficiency in various language-related tasks. However, LLMs are prone to generating factually incorrect responses or "hallucinations," which can lead to a loss of credibility and trust among users. To address this issue, we propose a multi-stage framework that first generates a rationale, verifies and refines incorrect ones, and then uses them as supporting references to generate the answer. The generated rationale improves transparency: by pairing it with references to the context, the framework offers insight into how the model arrived at its answer. In this paper, we demonstrate its effectiveness in improving the quality of responses to drug-related inquiries in the life sciences industry. Our framework improves traditional Retrieval Augmented Generation (RAG) by enabling OpenAI GPT-3.5-turbo to be 14-25% more faithful and 16-22% more accurate on two datasets. Furthermore, fine-tuning samples based on our framework improves the accuracy of smaller open-access LLMs by 33-42% and competes with RAG on commercial models.
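The multi-stage pipeline from the abstract (generate a rationale, verify and refine it against the retrieved context, then answer using the rationale as a supporting reference) can be sketched as below. This is a minimal illustration, not the paper's implementation: the `llm` and `verify` callables and the prompt wording are assumptions, standing in for GPT-3.5-turbo and the paper's consistency-verification step.

```python
def answer_with_rationale(question, context, llm, verify):
    """Hypothetical sketch of the rationale-first, verify-then-answer pipeline.

    llm(prompt) -> str      : any text-generation backend (assumed interface)
    verify(rationale, ctx)  : returns True if the rationale is consistent
                              with the retrieved context (assumed interface)
    """
    # Stage 1: generate a rationale grounded in the retrieved context.
    rationale = llm(
        f"Context: {context}\nQuestion: {question}\nGive a rationale."
    )

    # Stage 2: check the rationale against the context; refine if inconsistent.
    if not verify(rationale, context):
        rationale = llm(
            f"Context: {context}\nRefine this rationale so it is "
            f"consistent with the context: {rationale}"
        )

    # Stage 3: answer, using the verified rationale as a supporting reference.
    answer = llm(
        f"Context: {context}\nRationale: {rationale}\n"
        f"Question: {question}\nAnswer using the rationale."
    )
    return answer, rationale
```

The key design point is that the rationale is produced and validated *before* the final answer, so an inconsistent intermediate explanation is repaired rather than silently propagated into the response.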

Authors (8)
  1. Muneeswaran I (1 paper)
  2. Shreya Saxena (4 papers)
  3. Siva Prasad (3 papers)
  4. M V Sai Prakash (2 papers)
  5. Advaith Shankar (1 paper)
  6. Varun V (3 papers)
  7. Vishal Vaddina (6 papers)
  8. Saisubramaniam Gopalakrishnan (5 papers)
Citations (3)