
Neural Machine Translation Quality and Post-Editing Performance (2109.05016v1)

Published 10 Sep 2021 in cs.CL and cs.HC

Abstract: We test the natural expectation that using MT in professional translation saves human processing time. The last such study was carried out by Sanchez-Torron and Koehn (2016) with phrase-based MT, artificially reducing the translation quality. In contrast, we focus on neural MT (NMT) of high quality, which has since become the state-of-the-art approach and has also been adopted by most translation companies. Through an experimental study involving over 30 professional translators for English → Czech translation, we examine the relationship between NMT performance and post-editing time and quality. Across all models, we find that better MT systems indeed lead to fewer changes to the sentences in this industry setting. The relationship between system quality and post-editing time is, however, not straightforward and, contrary to the results on phrase-based MT, BLEU is not a stable predictor of the time taken or of the final output quality.
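Since the abstract's central metric is BLEU, a minimal sketch of how sentence-level BLEU is computed may help illustrate why it can be an unstable proxy for post-editing effort: a single changed word can zero out all higher-order n-gram matches. This is our own illustrative implementation (uniform n-gram weights, no smoothing), not the evaluation setup used in the paper; real evaluations use corpus-level, smoothed BLEU (e.g. sacreBLEU).

```python
from collections import Counter
from math import exp, log

def bleu(hypothesis, reference, max_n=4):
    """Minimal sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty. Illustrative only;
    unsmoothed, so any zero n-gram overlap yields a score of 0."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clipped (modified) precision: each reference n-gram counts at most once
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty punishes hypotheses shorter than the reference
    bp = min(1.0, exp(1 - len(ref) / len(hyp)))
    return bp * exp(sum(log(p) for p in precisions) / max_n)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
# Dropping one word kills all 4-gram overlap, so unsmoothed BLEU is 0:
print(bleu("the cat on the mat", "the cat sat on the mat"))  # 0.0
```

Note how coarse this is: the second hypothesis needs only one word inserted by a post-editor, yet scores 0, which is one intuition for why BLEU and post-editing time can diverge.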

Authors (4)
  1. Vilém Zouhar (41 papers)
  2. Aleš Tamchyna (3 papers)
  3. Martin Popel (14 papers)
  4. Ondřej Bojar (91 papers)
Citations (17)