
Hallucinations in Large Multilingual Translation Models (2303.16104v1)

Published 28 Mar 2023 in cs.CL

Abstract: Large-scale multilingual machine translation systems have demonstrated remarkable ability to translate directly between numerous languages, making them increasingly appealing for real-world applications. However, when deployed in the wild, these models may generate hallucinated translations which have the potential to severely undermine user trust and raise safety concerns. Existing research on hallucinations has primarily focused on small bilingual models trained on high-resource languages, leaving a gap in our understanding of hallucinations in massively multilingual models across diverse translation scenarios. In this work, we fill this gap by conducting a comprehensive analysis on both the M2M family of conventional neural machine translation models and ChatGPT, a general-purpose large language model (LLM) that can be prompted for translation. Our investigation covers a broad spectrum of conditions, spanning over 100 translation directions across various resource levels and going beyond English-centric language pairs. We provide key insights regarding the prevalence, properties, and mitigation of hallucinations, paving the way towards more responsible and reliable machine translation systems.

Authors (7)
  1. Nuno M. Guerreiro
  2. Duarte Alves
  3. Jonas Waldendorf
  4. Barry Haddow
  5. Alexandra Birch
  6. Pierre Colombo
  7. André F. T. Martins
Citations (115)
