Multi-Source Neural Machine Translation with Data Augmentation (1810.06826v2)

Published 16 Oct 2018 in cs.CL

Abstract: Multi-source translation systems translate from multiple languages to a single target language. By using information from these multiple sources, these systems achieve large gains in accuracy. To train these systems, it is necessary to have corpora with parallel text in multiple sources and the target language. However, these corpora are rarely complete in practice due to the difficulty of providing human translations in all of the relevant languages. In this paper, we propose a data augmentation approach to fill such incomplete parts using multi-source neural machine translation (NMT). In our experiments, results varied over different language combinations, but significant gains were observed when using a source language similar to the target language.
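As a rough illustration of the augmentation idea described in the abstract (not the paper's exact pipeline), the sketch below fills the missing sides of a multi-way parallel corpus by machine-translating from a language that is present, so the completed corpus can then train a multi-source system. The `translate()` function, the `pivot_lang` parameter, and the language codes are placeholders assumed for this example.

```python
from typing import Dict, List, Optional

def translate(source_sentence: str, src_lang: str, tgt_lang: str) -> str:
    # Placeholder: a real system would call a trained NMT model here
    # (the paper generates the missing text with multi-source NMT).
    return f"<{tgt_lang} MT of: {source_sentence}>"

def fill_incomplete_corpus(
    corpus: List[Dict[str, Optional[str]]],
    pivot_lang: str,
) -> List[Dict[str, str]]:
    """Fill missing sentences in a multi-way parallel corpus.

    Each entry maps a language code to a sentence, or to None when the
    human translation is missing. Missing entries are synthesized by
    translating from a pivot language that is always present.
    """
    completed = []
    for entry in corpus:
        assert entry.get(pivot_lang), "pivot sentence must exist"
        filled = {}
        for lang, sent in entry.items():
            if sent is None:
                # Data augmentation step: synthesize the missing side.
                filled[lang] = translate(entry[pivot_lang], pivot_lang, lang)
            else:
                filled[lang] = sent
        completed.append(filled)
    return completed

# Example: an English--French--German corpus where one German side is
# missing; it gets synthesized by translating from English.
corpus = [
    {"en": "Hello.", "fr": "Bonjour.", "de": None},
    {"en": "Thank you.", "fr": "Merci.", "de": "Danke."},
]
full = fill_incomplete_corpus(corpus, pivot_lang="en")
```

The paper's finding that gains depend on source-target similarity suggests the choice of pivot language matters: synthetic text generated from a language close to the target is more useful as augmented training data.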

Authors (4)
  1. Yuta Nishimura (6 papers)
  2. Katsuhito Sudoh (35 papers)
  3. Graham Neubig (342 papers)
  4. Satoshi Nakamura (94 papers)
Citations (19)