
Has Machine Translation Achieved Human Parity? A Case for Document-level Evaluation

Published 21 Aug 2018 in cs.CL (arXiv:1808.07048v1)

Abstract: Recent research suggests that neural machine translation achieves parity with professional human translation on the WMT Chinese--English news translation task. We empirically test this claim with alternative evaluation protocols, contrasting the evaluation of single sentences and entire documents. In a pairwise ranking experiment, human raters assessing adequacy and fluency show a stronger preference for human over machine translation when evaluating documents as compared to isolated sentences. Our findings emphasise the need to shift towards document-level evaluation as machine translation improves to the degree that errors which are hard or impossible to spot at the sentence-level become decisive in discriminating quality of different translation outputs.

Citations (243)

Summary

  • The paper challenges claims of human parity in neural machine translation by employing document-level evaluations instead of isolated sentence assessments.
  • It applies a 2×2 mixed factorial design with professional translators and pairwise ranking to analyze Chinese–English news translations.
  • Findings reveal a significant preference for human translations, highlighting context-based coherence issues and the need for discourse-aware systems.

Document-Level Evaluation in Machine Translation: Reassessing Human Parity

The paper, "Has Machine Translation Achieved Human Parity? A Case for Document-level Evaluation," rigorously investigates the claim that neural machine translation (NMT) can achieve parity with professional human translation, particularly in the context of the WMT Chinese--English news translation task. The research conducted by Samuel Läubli, Rico Sennrich, and Martin Volk, focuses on assessing the validity of these claims using a different evaluation protocol emphasizing document-level context over isolated sentences.

Context and Motivation

The field of machine translation (MT) has seen remarkable advancements, with neural approaches outperforming earlier phrase-based statistical models. Claims of NMT reaching human parity have emerged, buoyed by evidence such as that provided by Hassan et al. (2018), where machine translations reportedly matched the quality of professional human translations on certain tasks. This paper challenges the robustness of these claims, suggesting that existing evaluation methods, largely conducted at the sentence level, may obscure quality differences that only become apparent at the document level.

Methodological Approach

The authors devise an empirical study structured around a 2×2 mixed factorial design with two factors: (1) source text availability, distinguishing adequacy (source and translation available) from fluency (only translations available), and (2) experimental unit, contrasting isolated sentences with entire documents. Importantly, the experiment recruits professional translators rather than crowd workers, to leverage their nuanced expertise, and employs pairwise ranking rather than absolute rating to capture relative differences in quality more effectively.
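The pairwise ranking protocol yields, for each comparison, a rater preference for one of two translations. As a minimal sketch of how such preference counts could be checked for significance, the following uses an exact two-sided sign test; the counts and the helper function are hypothetical illustrations, not the paper's actual data or statistical analysis.

```python
# Sketch only: exact two-sided sign test on pairwise preference counts.
# The counts below are invented placeholders, not figures from the paper.
from math import comb


def sign_test_p_value(prefer_human: int, prefer_machine: int) -> float:
    """Exact two-sided sign test; ties are assumed to be dropped beforehand."""
    n = prefer_human + prefer_machine
    k = max(prefer_human, prefer_machine)
    # P(X >= k) under H0: each preference is a fair coin flip (p = 0.5).
    upper_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * upper_tail)


# Hypothetical example: 70 of 100 document-level judgements prefer the
# human translation; a small p-value indicates the split is unlikely by chance.
print(sign_test_p_value(70, 30))
```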

Key Findings

The document-level evaluations revealed distinct tendencies: whereas sentence-level assessments were largely equivocal about human–machine parity, document-level evaluations showed a statistically significant preference for human translations. This divergence points to inadequacies in existing evaluation methodologies, which may overlook context-dependent deficiencies in machine-generated text such as lexical inconsistency, weak coherence, and discourse-level errors. Notably, the preference for human translation remained pronounced even in assessments focused solely on fluency, a dimension previously thought to be a strength of neural systems.
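To make the contrast between evaluation units concrete, the sketch below compares hypothetical preference rates under the sentence-level and document-level conditions with a simple two-proportion z-test. The counts and the helper function are assumptions for illustration, not the paper's reported figures or its statistical analysis.

```python
# Sketch only: two-proportion z-test comparing how often raters prefer the
# human translation for isolated sentences versus entire documents.
from math import erfc, sqrt


def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Return (z, two-sided p) for H0: both preference rates are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail of the standard normal
    return z, p_value


# Hypothetical: human output preferred in 55/100 sentence-level and
# 70/100 document-level comparisons.
z, p = two_proportion_z_test(55, 100, 70, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```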

Implications and Future Directions

The shift towards document-level evaluation has substantial implications for both theoretical inquiry and practical applications. As machine translation continues to evolve, evaluation protocols that reflect the complexities of cohesive, multi-sentence discourse become increasingly crucial. The research highlights the pitfalls of equating sentence-level parity with holistic equivalence to human translation across varied contexts.

Prospective advancements might include systems that natively incorporate discourse-aware translation mechanisms and the cultivation of document-level training datasets to bridge the observed gaps. Refining evaluation metrics to incorporate these dimensions, perhaps through automated metrics aligned more closely with human judgment, would be a natural extension of this work.

In conclusion, while the investigation does not refute the impressive capabilities of modern NMT systems, it advocates for a recalibrated perspective on evaluation practices, ensuring that assessments are as comprehensive and contextually aware as the translation tasks themselves. This step is essential for both accurately gauging the progress of machine translation technology and guiding future innovations.
