
Language Model-Driven Unsupervised Neural Machine Translation (1911.03937v1)

Published 10 Nov 2019 in cs.CL

Abstract: Unsupervised neural machine translation (NMT) suffers from noise and errors in the synthetic data produced by vanilla back-translation. Here, we explicitly exploit a language model (LM) to drive the construction of an unsupervised NMT system. This involves two steps. First, we initialize NMT models with synthetic data generated via a temporary statistical machine translation (SMT) system. Second, unlike vanilla back-translation, we formulate a weight function that scores the synthetic data at each step of the subsequent iterative training, allowing unsupervised training to reach an improved outcome. We present a detailed mathematical construction of our method. Experiments on the WMT2014 English-French and the WMT2016 English-German and English-Russian translation tasks show that our method outperforms the best prior systems by more than 3 BLEU points.
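The abstract does not spell out the weight function, but the general idea it describes, scoring each back-translated pair with an LM and weighting the NMT training loss accordingly, can be sketched as follows. This is a minimal illustration under assumed interfaces, not the paper's actual formulation: `lm_weight`, `nmt_loss_fn`, and `lm_score_fn` are hypothetical names introduced here.

```python
import math

def lm_weight(lm_log_prob: float, length: int, temperature: float = 1.0) -> float:
    """Map an LM log-probability to a training weight in (0, 1].

    Length-normalizes the log-probability so longer sentences are not
    systematically down-weighted, then squashes it with a
    temperature-scaled exponential (an assumed form, not the paper's).
    """
    avg_log_prob = lm_log_prob / max(length, 1)
    return math.exp(avg_log_prob / temperature)

def weighted_back_translation_loss(batch, nmt_loss_fn, lm_score_fn):
    """One step of LM-weighted iterative training.

    batch        -- list of (synthetic_src_tokens, real_tgt_tokens) pairs
                    produced by back-translation
    nmt_loss_fn  -- per-example NMT negative log-likelihood (hypothetical)
    lm_score_fn  -- LM log-probability of the synthetic side (hypothetical)
    """
    total = 0.0
    for syn_src, real_tgt in batch:
        # Fluent synthetic sources get weights near 1; noisy ones near 0,
        # so errors in the synthetic data contribute less to the update.
        w = lm_weight(lm_score_fn(syn_src), len(syn_src))
        total += w * nmt_loss_fn(syn_src, real_tgt)
    return total / max(len(batch), 1)
```

Vanilla back-translation corresponds to setting every weight to 1; the LM-driven variant instead down-weights implausible synthetic sentences at each iteration.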

Authors (6)
  1. Wei Zhang (1489 papers)
  2. Youyuan Lin (1 paper)
  3. Ruoran Ren (1 paper)
  4. Xiaodong Wang (228 papers)
  5. Zhenshuang Liang (2 papers)
  6. Zhen Huang (114 papers)