Are Large Language Model-based Evaluators the Solution to Scaling Up Multilingual Evaluation? (2309.07462v2)
Abstract: LLMs excel in various NLP tasks, yet their evaluation, particularly in languages beyond the top $20$, remains inadequate due to the limitations of existing benchmarks and metrics. Employing LLMs as evaluators to rank or score other models' outputs emerges as a viable solution, addressing the constraints tied to human annotators and established benchmarks. In this study, we explore the potential of LLM-based evaluators, specifically GPT-4, in enhancing multilingual evaluation by calibrating them against $20$K human judgments across three text-generation tasks, five metrics, and eight languages. Our analysis reveals a bias in GPT-4-based evaluators towards higher scores, underscoring the necessity of calibration with native-speaker judgments, especially in low-resource and non-Latin-script languages, to ensure accurate evaluation of LLM performance across diverse languages.
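To make the evaluation setup concrete, below is a minimal Python sketch of the general LLM-as-evaluator pattern the abstract describes: prompting a model to score outputs on a metric, then checking how often those scores agree with human judgments and whether they skew high. The prompt template, function names (`score_with_llm_judge`, `percentage_agreement`, `score_distribution`), and the 0-2 rating scale are illustrative assumptions, not the paper's actual prompts or code.

```python
# Minimal sketch of LLM-as-evaluator calibration against human judgments.
# All names and the prompt template are illustrative placeholders, not the
# paper's actual implementation.

from collections import Counter
from typing import Callable, Sequence

# Hypothetical judging prompt; the paper's real prompts and metrics differ.
JUDGE_PROMPT = (
    "Rate the following {language} text for {metric} on a scale of 0-2.\n"
    "Text: {text}\n"
    "Answer with a single number."
)


def score_with_llm_judge(call_model: Callable[[str], str],
                         texts: Sequence[str],
                         language: str,
                         metric: str) -> list[int]:
    """Ask an LLM to rate each text; `call_model` wraps whatever API is used."""
    scores = []
    for text in texts:
        reply = call_model(
            JUDGE_PROMPT.format(language=language, metric=metric, text=text)
        )
        scores.append(int(reply.strip()[0]))  # naive parse of a 0-2 rating
    return scores


def percentage_agreement(llm_scores: Sequence[int],
                         human_scores: Sequence[int]) -> float:
    """Fraction of items where the LLM score matches the human judgment."""
    matches = sum(a == b for a, b in zip(llm_scores, human_scores))
    return matches / len(human_scores)


def score_distribution(scores: Sequence[int]) -> Counter:
    """Score histogram; a skew toward the top of the scale is the kind of
    high-score bias the paper reports for GPT-4-based evaluators."""
    return Counter(scores)
```

In use, one would compare `score_distribution(llm_scores)` against `score_distribution(human_scores)` per language and metric; a consistently heavier tail at the top of the scale for the LLM judge, and lower agreement on low-resource or non-Latin-script languages, would reproduce the pattern the abstract reports.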
- Rishav Hada (9 papers)
- Varun Gumma (14 papers)
- Adrian de Wynter (20 papers)
- Harshita Diddee (12 papers)
- Mohamed Ahmed (11 papers)
- Monojit Choudhury (66 papers)
- Kalika Bali (27 papers)
- Sunayana Sitaram (54 papers)