Finding Replicable Human Evaluations via Stable Ranking Probability (2404.01474v1)

Published 1 Apr 2024 in cs.CL

Abstract: Reliable human evaluation is critical to the development of successful natural language generation models, but achieving it is notoriously difficult. Stability is a crucial requirement when ranking systems by quality: consistent ranking of systems across repeated evaluations is not just desirable, but essential. Without it, there is no reliable foundation for hill-climbing or product launch decisions. In this paper, we use machine translation and its state-of-the-art human evaluation framework, MQM, as a case study to understand how to set up reliable human evaluations that yield stable conclusions. We investigate the optimal configurations for item allocation to raters, number of ratings per item, and score normalization. Our study on two language pairs provides concrete recommendations for designing replicable human evaluation studies. We also collect and release the largest publicly available dataset of multi-segment translations rated by multiple professional translators, consisting of nearly 140,000 segment annotations across two language pairs.
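The abstract's central idea is that a system ranking is only trustworthy if repeating the evaluation (with different raters or rating allocations) reproduces the same ordering. The sketch below is a minimal, hypothetical way to quantify that notion: it repeatedly splits the rater pool into two disjoint halves, ranks systems by mean score within each half, and reports how often pairwise system orderings agree. The function name, the split-half scheme, and the score format are illustrative assumptions, not the paper's actual "stable ranking probability" definition or implementation.

```python
import random
from itertools import combinations

def ranking_agreement(scores, n_trials=1000, seed=0):
    """Estimate ranking stability across simulated repeated evaluations.

    scores: dict mapping (system, rater) -> that rater's mean score for the
    system (e.g. an MQM-style error score, where lower is better).
    Returns the average fraction of system pairs ranked the same way by two
    disjoint halves of the rater pool, over random splits.
    """
    rng = random.Random(seed)
    raters = sorted({r for (_, r) in scores})
    systems = sorted({s for (s, _) in scores})
    pairs = list(combinations(systems, 2))
    agreements = []
    for _ in range(n_trials):
        rng.shuffle(raters)
        half = len(raters) // 2
        halves = (raters[:half], raters[half:])
        # Mean score per system within each rater half.
        means = [
            {s: sum(scores[(s, r)] for r in group) / len(group) for s in systems}
            for group in halves
        ]
        # Fraction of system pairs whose relative order agrees across halves.
        agree = sum(
            (means[0][a] < means[0][b]) == (means[1][a] < means[1][b])
            for a, b in pairs
        )
        agreements.append(agree / len(pairs))
    return sum(agreements) / len(agreements)
```

A value near 1.0 would indicate that independent rater subsets reproduce the same system ordering, i.e. the kind of stability the paper argues is required before using an evaluation for hill-climbing or launch decisions.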

Authors (6)
  1. Parker Riley (12 papers)
  2. Daniel Deutsch (28 papers)
  3. George Foster (24 papers)
  4. Viresh Ratnakar (4 papers)
  5. Ali Dabirmoghaddam (3 papers)
  6. Markus Freitag (49 papers)
Citations (5)