
Are Large Language Model-based Evaluators the Solution to Scaling Up Multilingual Evaluation? (2309.07462v2)

Published 14 Sep 2023 in cs.CL

Abstract: LLMs excel in various NLP tasks, yet their evaluation, particularly in languages beyond the top $20$, remains inadequate due to the limitations of existing benchmarks and metrics. Employing LLMs as evaluators to rank or score other models' outputs emerges as a viable solution, addressing the constraints tied to human annotators and established benchmarks. In this study, we explore the potential of LLM-based evaluators, specifically GPT-4, in enhancing multilingual evaluation by calibrating them against $20$K human judgments across three text-generation tasks, five metrics, and eight languages. Our analysis reveals a bias in GPT-4-based evaluators towards higher scores, underscoring the necessity of calibration with native speaker judgments, especially in low-resource and non-Latin script languages, to ensure accurate evaluation of LLM performance across diverse languages.
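
As a rough illustration of the calibration analysis the abstract describes, the sketch below (not the authors' code) compares hypothetical GPT-4 evaluator scores with native-speaker judgments per language, reporting the mean score gap and rank correlation. The file name, column names, and the choice of Spearman correlation are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's pipeline): compare LLM-evaluator scores with
# human judgments per language to surface systematic score inflation.
# Assumes a CSV with columns: language, human_score, gpt4_score (hypothetical).
import csv
from collections import defaultdict

from scipy.stats import spearmanr  # rank correlation between the two raters


def calibration_report(path: str) -> None:
    by_lang = defaultdict(lambda: ([], []))  # language -> (human scores, GPT-4 scores)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            human, gpt4 = by_lang[row["language"]]
            human.append(float(row["human_score"]))
            gpt4.append(float(row["gpt4_score"]))

    for lang, (human, gpt4) in sorted(by_lang.items()):
        # A positive mean gap means the LLM evaluator scores higher than
        # native speakers, i.e. the upward bias the paper reports.
        gap = sum(g - h for g, h in zip(gpt4, human)) / len(human)
        rho, _ = spearmanr(human, gpt4)
        print(f"{lang:12s}  mean(GPT-4 - human) = {gap:+.2f}   Spearman rho = {rho:.2f}")


# calibration_report("judgments.csv")  # hypothetical export of the human judgments
```
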

Authors (8)
  1. Rishav Hada (9 papers)
  2. Varun Gumma (14 papers)
  3. Adrian de Wynter (20 papers)
  4. Harshita Diddee (12 papers)
  5. Mohamed Ahmed (11 papers)
  6. Monojit Choudhury (66 papers)
  7. Kalika Bali (27 papers)
  8. Sunayana Sitaram (54 papers)
Citations (44)