Pushing the Right Buttons: Adversarial Evaluation of Quality Estimation (2109.10859v1)

Published 22 Sep 2021 in cs.CL and cs.AI

Abstract: Current Machine Translation (MT) systems achieve very good results on a growing variety of language pairs and datasets. However, they are known to produce fluent translation outputs that can contain important meaning errors, thus undermining their reliability in practice. Quality Estimation (QE) is the task of automatically assessing the performance of MT systems at test time. Thus, in order to be useful, QE systems should be able to detect such errors. However, this ability is yet to be tested in the current evaluation practices, where QE systems are assessed only in terms of their correlation with human judgements. In this work, we bridge this gap by proposing a general methodology for adversarial testing of QE for MT. First, we show that despite a high correlation with human judgements achieved by the recent SOTA, certain types of meaning errors are still problematic for QE to detect. Second, we show that on average, the ability of a given model to discriminate between meaning-preserving and meaning-altering perturbations is predictive of its overall performance, thus potentially allowing for comparing QE systems without relying on manual quality annotation.
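The discrimination ability described in the abstract can be sketched as follows: a QE model should assign a higher score to a meaning-preserving perturbation of a translation than to a meaning-altering one. This is a minimal illustrative sketch; the function name and the toy scores are hypothetical and not taken from the paper's implementation.

```python
# Hedged sketch of the adversarial-discrimination idea from the abstract:
# a QE model should score meaning-altering perturbations of a translation
# lower than meaning-preserving ones. All names and numbers below are
# illustrative assumptions, not the authors' code or data.

def discrimination_rate(preserving_scores, altering_scores):
    """Fraction of (preserving, altering) pairs where the QE model
    assigns a strictly higher score to the meaning-preserving output."""
    hits = sum(1 for p, a in zip(preserving_scores, altering_scores) if p > a)
    return hits / len(preserving_scores)

# Toy QE scores for two perturbations of the same source sentences.
preserving = [0.82, 0.75, 0.90, 0.68]   # e.g. synonym substitution
altering   = [0.80, 0.40, 0.55, 0.70]   # e.g. negation, entity swap

print(discrimination_rate(preserving, altering))  # 0.75
```

Per the abstract, this per-model discrimination rate is, on average, predictive of the model's overall correlation with human judgements, which suggests comparing QE systems without manual quality annotation.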

Authors (6)
  1. Diptesh Kanojia (58 papers)
  2. Marina Fomicheva (11 papers)
  3. Tharindu Ranasinghe (52 papers)
  4. Frédéric Blain (10 papers)
  5. Lucia Specia (68 papers)
  6. Constantin Orăsan (9 papers)
Citations (10)
