
Self-Training for Unsupervised Neural Machine Translation in Unbalanced Training Data Scenarios (2004.04507v2)

Published 9 Apr 2020 in cs.CL

Abstract: Unsupervised neural machine translation (UNMT), which relies solely on massive monolingual corpora, has achieved remarkable results in several translation tasks. However, in real-world scenarios, massive monolingual corpora do not exist for some extremely low-resource languages such as Estonian, and UNMT systems usually perform poorly when an adequate training corpus is not available for one of the languages. In this paper, we first define and analyze the unbalanced training data scenario for UNMT. Based on this scenario, we propose UNMT self-training mechanisms to train a robust UNMT system and improve its performance in this scenario. Experimental results on several language pairs show that the proposed methods substantially outperform conventional UNMT systems.
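
The abstract does not spell out the exact self-training procedure, but the general idea behind self-training for NMT can be sketched: the model translates monolingual sentences from the data-poor language, filters its own outputs, and retrains on the resulting synthetic pairs. Below is a minimal, illustrative Python sketch of one such round; translate_fn, train_fn, and the confidence threshold are assumed interfaces for illustration, not the authors' implementation.

from typing import Callable, Iterable

def self_training_round(
    translate_fn: Callable[[str], tuple[str, float]],   # assumed: returns (hypothesis, confidence)
    train_fn: Callable[[list[tuple[str, str]]], None],  # assumed: updates the model on synthetic pairs
    mono_low_resource: Iterable[str],
    confidence_threshold: float = 0.5,
) -> int:
    """Run one self-training round; return the number of synthetic pairs kept."""
    synthetic_pairs: list[tuple[str, str]] = []
    for sentence in mono_low_resource:
        hypothesis, confidence = translate_fn(sentence)
        # Keep only confident self-translations to limit noise amplification:
        # in the unbalanced setting, the data-poor side yields a weaker model,
        # so unfiltered outputs would feed errors back into training.
        if confidence >= confidence_threshold:
            synthetic_pairs.append((sentence, hypothesis))
    train_fn(synthetic_pairs)  # retrain on the augmented corpus
    return len(synthetic_pairs)

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end (purely illustrative).
    corpus = ["tere hommikust", "head aega"]
    kept = self_training_round(
        translate_fn=lambda s: (s.upper(), 0.9),  # dummy "model"
        train_fn=lambda pairs: None,              # dummy update step
        mono_low_resource=corpus,
    )
    print(f"kept {kept} synthetic pairs")

Iterating such rounds lets the scarce side of the corpus grow with model-generated pairs, which is the intuition behind making the system more robust when one language lacks massive monolingual data.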

Authors (6)
  1. Haipeng Sun
  2. Rui Wang
  3. Kehai Chen
  4. Masao Utiyama
  5. Eiichiro Sumita
  6. Tiejun Zhao
Citations (14)