Towards Better Evaluation of Instruction-Following: A Case-Study in Summarization (2310.08394v2)

Published 12 Oct 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Despite recent advances, evaluating how well LLMs follow user instructions remains an open problem. While prompt-based evaluation methods for LLMs have proliferated, little work has examined whether these methods are themselves correct. In this work, we perform a meta-evaluation of a variety of metrics to quantify how accurately they measure the instruction-following abilities of LLMs. Our investigation is performed on grounded query-based summarization by collecting riSum, a new short-form, real-world dataset containing 300 document-instruction pairs with 3 answers each. All 900 answers are rated by 3 human annotators. Using riSum, we analyze the agreement between evaluation methods and human judgment. Finally, we propose new LLM-based reference-free evaluation methods that improve upon established baselines and perform on par with costly reference-based metrics that require high-quality summaries.

Authors (4)
  1. Ondrej Skopek (5 papers)
  2. Rahul Aralikatte (24 papers)
  3. Sian Gooding (8 papers)
  4. Victor Carbune (11 papers)
Citations (12)