Evaluating the IWSLT2023 Speech Translation Tasks: Human Annotations, Automatic Metrics, and Segmentation (2406.03881v1)

Published 6 Jun 2024 in cs.CL

Abstract: Human evaluation is a critical component in machine translation system development and has received much attention in text translation research. However, little prior work exists on the topic of human evaluation for speech translation, which adds additional challenges such as noisy data and segmentation mismatches. We take first steps to fill this gap by conducting a comprehensive human evaluation of the results of several shared tasks from the last International Workshop on Spoken Language Translation (IWSLT 2023). We propose an effective evaluation strategy based on automatic resegmentation and direct assessment with segment context. Our analysis revealed that: 1) the proposed evaluation strategy is robust and scores well-correlated with other types of human judgements; 2) automatic metrics are usually, but not always, well-correlated with direct assessment scores; and 3) COMET is a slightly stronger automatic metric than chrF, despite the segmentation noise introduced by the resegmentation step. We release the collected human-annotated data in order to encourage further investigation.
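As an illustrative aside (not drawn from the paper), the segment-level correlation between an automatic metric and direct-assessment scores described in points 2 and 3 can be sketched as follows. The hypotheses, references, and DA scores below are invented placeholders; chrF is computed with sacrebleu, and the segments are assumed to already be aligned (e.g. after automatic resegmentation).

```python
# Minimal sketch: correlate a segment-level automatic metric (chrF) with
# hypothetical human direct-assessment (DA) scores. Data here is made up
# purely for illustration; it is not from the IWSLT 2023 evaluation.
from sacrebleu.metrics import CHRF
from scipy.stats import pearsonr

chrf = CHRF()

# Assumed inputs: system outputs, references, and 0-100 DA scores,
# all aligned to the same segmentation.
hypotheses = ["This is a test .", "Another sentence .", "A third output .", "Last one ."]
references = ["This is a test.", "One more sentence.", "A third segment.", "The last one."]
da_scores = [92.0, 71.5, 64.0, 80.0]  # hypothetical human judgements

# Segment-level chrF for each hypothesis-reference pair.
metric_scores = [chrf.sentence_score(h, [r]).score
                 for h, r in zip(hypotheses, references)]

# Pearson correlation between the automatic metric and the DA scores.
r, p = pearsonr(metric_scores, da_scores)
print(f"segment-level Pearson r = {r:.3f} (p = {p:.3f})")
```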

Authors (11)
  1. Matthias Sperber (24 papers)
  2. Ondřej Bojar (91 papers)
  3. Barry Haddow (59 papers)
  4. Dávid Javorský (7 papers)
  5. Xutai Ma (23 papers)
  6. Matteo Negri (93 papers)
  7. Jan Niehues (76 papers)
  8. Peter Polák (11 papers)
  9. Elizabeth Salesky (27 papers)
  10. Katsuhito Sudoh (35 papers)
  11. Marco Turchi (51 papers)
Citations (2)