A Set of Recommendations for Assessing Human-Machine Parity in Language Translation (2004.01694v1)

Published 3 Apr 2020 in cs.CL and cs.AI

Abstract: The quality of machine translation has increased remarkably over the past years, to the degree that it was found to be indistinguishable from professional human translation in a number of empirical investigations. We reassess Hassan et al.'s 2018 investigation into Chinese-to-English news translation, showing that the finding of human-machine parity was owed to weaknesses in the evaluation design, which is currently considered best practice in the field. We show that the professional human translations contained significantly fewer errors, and that perceived quality in human evaluation depends on the choice of raters, the availability of linguistic context, and the creation of reference translations. Our results call for revisiting current best practices to assess strong machine translation systems in general, and human-machine parity in particular, for which we offer a set of recommendations based on our empirical findings.

Authors (6)
  1. Samuel Läubli
  2. Sheila Castilho
  3. Graham Neubig
  4. Rico Sennrich
  5. Qinlan Shen
  6. Antonio Toral
Citations (90)