
Cross-model Back-translated Distillation for Unsupervised Machine Translation (2006.02163v4)

Published 3 Jun 2020 in cs.CL and cs.LG

Abstract: Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling, and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide the data diversity needed to train UMT systems. However, the gains from these diversification processes have seemed to plateau. We introduce a novel component to the standard UMT framework called Cross-model Back-translated Distillation (CBD), which aims to induce another level of data diversification that existing principles lack. CBD is applicable to all previous UMT approaches. In our experiments, CBD achieves the state of the art in the WMT'14 English-French, WMT'16 English-German and English-Romanian bilingual unsupervised translation tasks, with 38.2, 30.1, and 36.3 BLEU respectively. It also yields 1.5-3.3 BLEU improvements in IWSLT English-French and English-German tasks. Through extensive experimental analyses, we show that CBD is effective because it embraces data diversity while other similar variants do not.
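The abstract only names the mechanism, so the following is a minimal Python sketch of one plausible reading of the cross-model back-translation step: one pretrained UMT agent translates monolingual source text into the target language, a second agent translates it back, and the resulting synthetic pairs serve as extra, more diverse bitext for distilling a final model. The agent objects, the `translate` method, and the exact pairing are assumptions for illustration, not the paper's actual interface.

```python
# Hypothetical sketch of the CBD data-generation loop described in the abstract.
# agent_a / agent_b and their translate() method are placeholder interfaces,
# not the authors' implementation.

def cbd_synthetic_pairs(mono_src, agent_a, agent_b):
    """Cross-model back-translation: agent_a translates monolingual source
    sentences into the target language, agent_b translates them back.
    The (back-translated source, intermediate target) pairs form the extra
    synthetic parallel data used to distill the final UMT model."""
    pairs = []
    for s in mono_src:
        t_hat = agent_a.translate(s, direction="src->tgt")      # forward pass by model A
        s_hat = agent_b.translate(t_hat, direction="tgt->src")  # back-translation by model B
        pairs.append((s_hat, t_hat))                            # synthetic bitext for distillation
    return pairs
```

Because the forward and backward passes come from two different models, the synthetic pairs carry diversity that single-model iterative back-translation cannot produce on its own, which is the effect the abstract attributes to CBD.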

Authors (5)
  1. Xuan-Phi Nguyen (22 papers)
  2. Shafiq Joty (187 papers)
  3. Thanh-Tung Nguyen (18 papers)
  4. Wu Kui (3 papers)
  5. Ai Ti Aw (18 papers)
Citations (14)