
DelucionQA: Detecting Hallucinations in Domain-specific Question Answering (2312.05200v1)

Published 8 Dec 2023 in cs.CL

Abstract: Hallucination is a well-known phenomenon in text generated by LLMs. Hallucinatory responses appear in almost all application scenarios, e.g., summarization and question answering (QA). For applications requiring high reliability (e.g., customer-facing assistants), the potential presence of hallucination in LLM-generated text is a critical problem. The amount of hallucination can be reduced by leveraging information retrieval to provide relevant background information to the LLM. However, LLMs can still generate hallucinatory content for various reasons (e.g., prioritizing their parametric knowledge over the context, or failing to capture the relevant information from the context). Detecting hallucinations through automated methods is thus paramount. To facilitate research in this direction, we introduce a sophisticated dataset, DelucionQA, that captures hallucinations made by retrieval-augmented LLMs on a domain-specific QA task. Furthermore, we propose a set of hallucination detection methods to serve as baselines for future work by the research community. An analysis and case study are also provided to share valuable insights on hallucination phenomena in the target scenario.
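The abstract describes detecting hallucinations by checking whether a retrieval-augmented LLM's answer is actually supported by the retrieved context. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual baselines: it flags answer sentences whose n-gram overlap with the retrieved context falls below a threshold. The function names, the n-gram granularity, and the threshold are all assumptions chosen for the example.

```python
def support_score(answer_sentence: str, context: str, n: int = 3) -> float:
    """Fraction of the answer sentence's n-grams that appear in the context.

    A score near 1.0 suggests the sentence is grounded in the retrieved
    context; a low score suggests possible hallucination. (Illustrative
    heuristic only, not the method from the DelucionQA paper.)
    """
    def ngrams(text: str, n: int) -> set:
        toks = text.lower().split()
        # Fall back to the whole token sequence when it is shorter than n.
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)} or {tuple(toks)}

    a = ngrams(answer_sentence, n)
    c = ngrams(context, n)
    return len(a & c) / len(a)


def flag_unsupported(answer_sentences: list, context: str,
                     threshold: float = 0.5) -> list:
    """Return the answer sentences whose support score is below threshold."""
    return [s for s in answer_sentences if support_score(s, context) < threshold]
```

In practice, lexical overlap like this is brittle to paraphrasing; embedding-based similarity or entailment models are common stronger alternatives, and the paper's own baselines should be consulted for the methods it actually evaluates.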

Authors (9)
  1. Mobashir Sadat (7 papers)
  2. Zhengyu Zhou (3 papers)
  3. Lukas Lange (31 papers)
  4. Jun Araki (11 papers)
  5. Arsalan Gundroo (1 paper)
  6. Bingqing Wang (6 papers)
  7. Rakesh R Menon (24 papers)
  8. Md Rizwan Parvez (24 papers)
  9. Zhe Feng (53 papers)
Citations (23)