The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation (2308.07286v1)

Published 14 Aug 2023 in cs.CL and cs.LG

Abstract: Automatic evaluation of machine translation (MT) is a critical tool driving the rapid iterative development of MT systems. While considerable progress has been made on estimating a single scalar quality score, current metrics lack the informativeness of more detailed schemes that annotate individual errors, such as Multidimensional Quality Metrics (MQM). In this paper, we help fill this gap by proposing AutoMQM, a prompting technique which leverages the reasoning and in-context learning capabilities of LLMs and asks them to identify and categorize errors in translations. We start by evaluating recent LLMs, such as PaLM and PaLM-2, through simple score prediction prompting, and we study the impact of labeled data through in-context learning and finetuning. We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores (with particularly large gains for larger models) while providing interpretability through error spans that align with human annotations.

The paper "The Devil is in the Errors: Leveraging LLMs for Fine-grained Machine Translation Evaluation" addresses the gap between traditional scalar quality scores in machine translation (MT) evaluation and more detailed error annotation schemes, specifically targeting Multidimensional Quality Metrics (MQM).

Background and Motivation

Automatic evaluation metrics for MT, such as BLEU, primarily provide single scalar scores to quantify translation quality. While useful, these metrics lack granularity and fail to offer actionable insights into specific types of errors. MQM, on the other hand, offers a fine-grained approach by categorizing individual errors, but its reliance on human annotation is resource-intensive.
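
To make the contrast concrete, the sketch below shows a simplified MQM-style annotation record and how annotated errors might be aggregated into a single segment-level penalty. The field names and the severity weights (minor = 1, major = 5) are common MQM conventions used here as illustrative assumptions, not the exact schema from the paper.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical, simplified MQM-style annotation record (illustrative only).
@dataclass
class MQMError:
    span: str       # offending text span in the translation
    category: str   # e.g. "accuracy/mistranslation", "fluency/grammar"
    severity: str   # "minor" or "major"

# Common MQM-style weighting: minor errors cost 1 point, major errors 5.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5}

def mqm_score(errors: List[MQMError]) -> int:
    """Aggregate annotated errors into a single segment-level penalty (negated)."""
    return -sum(SEVERITY_WEIGHTS[e.severity] for e in errors)

errors = [
    MQMError(span="bank", category="accuracy/mistranslation", severity="major"),
    MQMError(span="a the", category="fluency/grammar", severity="minor"),
]
print(mqm_score(errors))  # -6
```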

Main Contributions

The primary contribution of this paper is the introduction of AutoMQM, a novel technique that leverages the capabilities of LLMs to automatically identify and categorize translation errors. By utilizing the reasoning skills and in-context learning capabilities of LLMs, AutoMQM aims to bridge the gap between simple score-based evaluations and detailed error annotations.

Methodology

  1. Baseline Evaluations: The authors begin by evaluating recent LLMs such as PaLM and PaLM-2 through straightforward score-prediction prompting. This step serves as a baseline to compare against more complex techniques.
  2. In-Context Learning and Fine-tuning: The impact of labeled data is explored through in-context learning and fine-tuning. Labeled examples are either included in the prompt or used to fine-tune the models, and the resulting score predictions are re-evaluated.
  3. Introducing AutoMQM:
    • The authors propose a prompting technique that instructs LLMs to identify and categorize specific types of errors in translations.
    • The method leverages the LLMs' in-context learning to move beyond simple scalar predictions; a minimal prompt sketch follows this list.
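
The sketch below illustrates how an AutoMQM-style annotation prompt could be assembled. The instruction text, few-shot example format, and output schema are illustrative assumptions; the exact prompts used in the paper may differ.

```python
# Hypothetical sketch of an AutoMQM-style prompt (not the paper's exact wording).
INSTRUCTION = (
    "You are an expert translation annotator. Identify all errors in the "
    "translation of the source sentence. For each error, report the error "
    "span, its MQM category (e.g. accuracy/mistranslation, fluency/grammar), "
    "and its severity (major or minor)."
)

def build_automqm_prompt(source, candidate, few_shot_examples):
    """Assemble an in-context-learning prompt for fine-grained error annotation."""
    parts = [INSTRUCTION, ""]
    for ex in few_shot_examples:  # each: dict with source, candidate, annotations
        parts += [
            f"Source: {ex['source']}",
            f"Translation: {ex['candidate']}",
            f"Errors: {ex['annotations']}",
            "",
        ]
    parts += [f"Source: {source}", f"Translation: {candidate}", "Errors:"]
    return "\n".join(parts)

prompt = build_automqm_prompt(
    source="Das Haus ist klein.",
    candidate="The house is big.",
    few_shot_examples=[{
        "source": "Ich habe Hunger.",
        "candidate": "I am hunger.",
        "annotations": "'am hunger' - fluency/grammar - minor",
    }],
)
# The assembled prompt would then be sent to an LLM such as PaLM-2 for completion.
```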

Evaluation and Results

The paper presents a comprehensive evaluation of AutoMQM using PaLM-2 models. Key findings include:

  • Improved Performance: AutoMQM improves over baseline score-prediction prompting, with gains that are more pronounced for larger models, indicating that the approach benefits from scale.
  • Interpretability: A significant advantage of AutoMQM is that it produces interpretable results in the form of error spans. These predicted spans align closely with human annotations, offering a more transparent evaluation process; one way such alignment could be quantified is sketched below.
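
As a rough illustration of measuring span alignment, the sketch below computes character-level overlap F1 between predicted and human-annotated error spans. This particular overlap measure is an assumption for illustration, not necessarily the metric reported in the paper.

```python
def char_spans(text, spans):
    """Mark character positions covered by annotated error spans (assumes each
    span string occurs verbatim in the text; a simplification for illustration)."""
    covered = set()
    for span in spans:
        start = text.find(span)
        if start != -1:
            covered.update(range(start, start + len(span)))
    return covered

def span_overlap_f1(translation, predicted_spans, human_spans):
    """Character-level F1 between predicted and human error spans."""
    pred = char_spans(translation, predicted_spans)
    gold = char_spans(translation, human_spans)
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

print(span_overlap_f1("The house is big.", ["big"], ["is big"]))  # ≈ 0.67
```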

Conclusion

AutoMQM combines the strength of LLMs in understanding and contextualizing language with the need for fine-grained, interpretable MT evaluation. The method not only improves performance over score-only prompting but also narrows the gap to human-like error categorization, potentially reducing the reliance on human annotators.

The paper's findings contribute to advancing the field of machine translation evaluation by proposing an automated, detailed, and interpretable assessment method.

Authors (10)
  1. Patrick Fernandes (32 papers)
  2. Daniel Deutsch (28 papers)
  3. Mara Finkelstein (13 papers)
  4. Parker Riley (12 papers)
  5. André F. T. Martins (113 papers)
  6. Graham Neubig (342 papers)
  7. Ankush Garg (14 papers)
  8. Jonathan H. Clark (17 papers)
  9. Markus Freitag (49 papers)
  10. Orhan Firat (80 papers)
Citations (54)