The Eval4NLP 2023 Shared Task on Prompting Large Language Models as Explainable Metrics (2310.19792v1)

Published 30 Oct 2023 in cs.CL

Abstract: With an increasing number of parameters and pre-training data, generative LLMs have shown remarkable capabilities to solve tasks with minimal or no task-related examples. Notably, LLMs have been successfully employed as evaluation metrics in text generation tasks. Within this context, we introduce the Eval4NLP 2023 shared task that asks participants to explore prompting and score extraction for machine translation (MT) and summarization evaluation. Specifically, we propose a novel competition setting in which we select a list of allowed LLMs and disallow fine-tuning to ensure a focus on prompting. We present an overview of participants' approaches and evaluate them on a new reference-free test set spanning three language pairs for MT and a summarization dataset. Notably, despite the task's restrictions, the best-performing systems achieve results on par with or even surpassing recent reference-free metrics developed using larger models, including GEMBA and Comet-Kiwi-XXL. Finally, as a separate track, we perform a small-scale human evaluation of the plausibility of explanations given by the LLMs.
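The shared task centers on prompting an LLM to judge a translation or summary and then parsing a numeric quality score from its free-form reply. A minimal, illustrative sketch of such score extraction is below; the function name, regex, and score range are assumptions for illustration, not the task's official baseline.

```python
import re

def extract_score(llm_output: str, lo: float = 0.0, hi: float = 100.0):
    """Pull the first number from an LLM's textual judgment and clamp it
    to [lo, hi]. Returns None when no number is found.

    Note: this is a hypothetical helper sketching the general idea of
    score extraction, not code from the shared task itself."""
    match = re.search(r"-?\d+(?:\.\d+)?", llm_output)
    if match is None:
        return None
    return min(max(float(match.group()), lo), hi)

# Example: a typical free-form LLM judgment
print(extract_score("I would rate this translation 85 out of 100."))  # 85.0
```

Participant systems differed mainly in the prompt wording and in how robustly they handled replies that lack a clean number, which is why the sketch returns `None` rather than guessing.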

Authors (6)
  1. Christoph Leiter (13 papers)
  2. Juri Opitz (30 papers)
  3. Daniel Deutsch (28 papers)
  4. Yang Gao (761 papers)
  5. Rotem Dror (14 papers)
  6. Steffen Eger (90 papers)
Citations (25)