A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation (2104.08704v2)

Published 18 Apr 2021 in cs.CL and cs.AI

Abstract: Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications. Existing work usually attempts to detect these hallucinations based on a corresponding oracle reference at a sentence or document level. However, ground-truth references may not be readily available for many free-form text generation applications, and sentence- or document-level detection may fail to provide the fine-grained signals that would prevent fallacious content in real time. As a first step to addressing these issues, we propose a novel token-level, reference-free hallucination detection task and an associated annotated dataset named HaDes (HAllucination DEtection dataSet). To create this dataset, we first perturb a large number of text segments extracted from English-language Wikipedia, and then verify these with crowd-sourced annotations. To mitigate label imbalance during annotation, we utilize an iterative model-in-loop strategy. We conduct comprehensive data analyses and create multiple baseline models.
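To make the dataset-construction idea concrete, the sketch below is a toy illustration (not the authors' code) of how candidate token-level hallucination labels can be derived when a perturbed Wikipedia segment is paired with its source: tokens in the perturbed text that do not align with the original are flagged as candidate hallucinations, which in HaDes are then verified by crowd annotators.

```python
# Toy sketch, assuming whitespace tokenization; HaDes itself uses
# crowd-sourced verification on top of such automatic perturbation.
import difflib

def token_hallucination_labels(original: str, perturbed: str):
    """Return (token, label) pairs for the perturbed text, where
    label 1 marks tokens absent from the original (candidates)."""
    orig_toks = original.split()
    pert_toks = perturbed.split()
    labels = [1] * len(pert_toks)  # assume hallucinated until matched
    matcher = difflib.SequenceMatcher(a=orig_toks, b=pert_toks)
    for block in matcher.get_matching_blocks():
        for j in range(block.b, block.b + block.size):
            labels[j] = 0  # token aligns with the original context
    return list(zip(pert_toks, labels))

# Example: a single perturbed token is flagged.
pairs = token_hallucination_labels(
    "the cat sat on the mat",
    "the dog sat on the mat",
)
# pairs[1] is ("dog", 1); all other tokens carry label 0
```

This mirrors the task formulation in spirit only: at prediction time the model sees just the perturbed text (reference-free) and must assign each token a hallucination label.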

Authors (7)
  1. Tianyu Liu (177 papers)
  2. Yizhe Zhang (127 papers)
  3. Chris Brockett (37 papers)
  4. Yi Mao (78 papers)
  5. Zhifang Sui (89 papers)
  6. Weizhu Chen (128 papers)
  7. Bill Dolan (45 papers)
Citations (121)