Leveraging Monolingual Data with Self-Supervision for Multilingual Neural Machine Translation (2005.04816v1)

Published 11 May 2020 in cs.CL and cs.LG

Abstract: Over the last few years two promising research directions in low-resource neural machine translation (NMT) have emerged. The first focuses on utilizing high-resource languages to improve the quality of low-resource languages via multilingual NMT. The second direction employs monolingual data with self-supervision to pre-train translation models, followed by fine-tuning on small amounts of supervised data. In this work, we join these two lines of research and demonstrate the efficacy of monolingual data with self-supervision in multilingual NMT. We offer three major results: (i) Using monolingual data significantly boosts the translation quality of low-resource languages in multilingual models. (ii) Self-supervision improves zero-shot translation quality in multilingual models. (iii) Leveraging monolingual data with self-supervision provides a viable path towards adding new languages to multilingual models, getting up to 33 BLEU on ro-en translation without any parallel data or back-translation.
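To make the approach in the abstract concrete, below is a minimal, hedged sketch (not the authors' code) of how monolingual data can be turned into self-supervised denoising examples (MASS-style span masking, which this line of work builds on) and mixed with supervised parallel data in a single multilingual training stream using target-language tokens. All names here (MASK, make_denoising_example, mixed_batch, mono_fraction) are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical mask token and <2xx> target-language tokens, as commonly used
# in multilingual NMT; the exact tokens in the paper may differ.
MASK = "<mask>"

def make_denoising_example(sentence_tokens, lang, mask_ratio=0.5):
    """Mask a contiguous span of a monolingual sentence; the model reconstructs it."""
    n = len(sentence_tokens)
    span = max(1, int(n * mask_ratio))
    start = random.randint(0, n - span)
    source = sentence_tokens[:start] + [MASK] * span + sentence_tokens[start + span:]
    target = sentence_tokens[start:start + span]
    # The target-language token tells the shared multilingual model which
    # language to generate, so denoising and translation share one interface.
    return [f"<2{lang}>"] + source, target

def make_translation_example(src_tokens, tgt_tokens, tgt_lang):
    """Standard supervised example with a target-language token prepended."""
    return [f"<2{tgt_lang}>"] + src_tokens, tgt_tokens

def mixed_batch(parallel, monolingual, batch_size=4, mono_fraction=0.5):
    """Sample a batch mixing supervised translation and self-supervised denoising."""
    batch = []
    for _ in range(batch_size):
        if monolingual and random.random() < mono_fraction:
            sent, lang = random.choice(monolingual)
            batch.append(make_denoising_example(sent, lang))
        else:
            src, tgt, tgt_lang = random.choice(parallel)
            batch.append(make_translation_example(src, tgt, tgt_lang))
    return batch

if __name__ == "__main__":
    # Toy data only, for illustration of the mixing scheme.
    parallel = [("bună dimineața".split(), "good morning".split(), "en")]
    monolingual = [("ce mai faci astăzi".split(), "ro")]
    for src, tgt in mixed_batch(parallel, monolingual):
        print(src, "->", tgt)
```

Under this kind of setup, a language with no parallel data at all (e.g. ro in the paper's ro-en result) can still contribute monolingual denoising examples, which is what makes adding new languages without parallel data or back-translation plausible.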

Authors (8)
  1. Aditya Siddhant (22 papers)
  2. Ankur Bapna (53 papers)
  3. Yuan Cao (201 papers)
  4. Orhan Firat (80 papers)
  5. Mia Chen (6 papers)
  6. Sneha Kudugunta (14 papers)
  7. Naveen Arivazhagan (15 papers)
  8. Yonghui Wu (115 papers)
Citations (86)