
Understanding and Detecting Hallucinations in Neural Machine Translation via Model Introspection (2301.07779v2)

Published 18 Jan 2023 in cs.CL

Abstract: Neural sequence generation models are known to "hallucinate", by producing outputs that are unrelated to the source text. These hallucinations are potentially harmful, yet it remains unclear in what conditions they arise and how to mitigate their impact. In this work, we first identify internal model symptoms of hallucinations by analyzing the relative token contributions to the generation in contrastive hallucinated vs. non-hallucinated outputs generated via source perturbations. We then show that these symptoms are reliable indicators of natural hallucinations, by using them to design a lightweight hallucination detector which outperforms both model-free baselines and strong classifiers based on quality estimation or large pre-trained models on manually annotated English-Chinese and German-English translation test beds.
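The detector described in the abstract builds on internal model symptoms, specifically how much each generated token is attributed to the source sentence versus the previously generated target prefix. As a rough illustration of that idea (not the paper's actual implementation), the sketch below assumes per-token contribution matrices are already available, e.g. from an attribution method, and flags an output as hallucinated when the source's share of total contribution is low. The function names, the aggregation, and the threshold value are all hypothetical.

```python
import numpy as np

def source_contribution_ratio(contrib_src, contrib_tgt):
    """For each generated token, return the fraction of total attributed
    contribution that comes from source tokens rather than the target prefix.

    contrib_src: (T, S) array, contribution of each of S source tokens
                 to each of T output tokens.
    contrib_tgt: (T, P) array, contribution of the target prefix tokens.
    """
    src = contrib_src.sum(axis=1)  # total source contribution per output token
    tgt = contrib_tgt.sum(axis=1)  # total target-prefix contribution
    return src / (src + tgt)

def detect_hallucination(contrib_src, contrib_tgt, threshold=0.4):
    """Flag the output as hallucinated when the mean source-contribution
    ratio across generated tokens falls below a (hypothetical) threshold."""
    ratios = source_contribution_ratio(contrib_src, contrib_tgt)
    return float(ratios.mean()) < threshold
```

On this toy scheme, an output whose tokens are attributed almost entirely to the target prefix (a symptom of "detachment" from the source) would be flagged, while an output with strong source attribution would not.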

Authors (5)
  1. Weijia Xu (23 papers)
  2. Sweta Agrawal (35 papers)
  3. Eleftheria Briakou (21 papers)
  4. Marine Carpuat (56 papers)
  5. Marianna J. Martindale (2 papers)
Citations (36)
