Revisiting the Gold Standard: Grounding Summarization Evaluation with Robust Human Evaluation (2212.07981v2)

Published 15 Dec 2022 in cs.CL

Abstract: Human evaluation is the foundation upon which the evaluation of both summarization systems and automatic metrics rests. However, existing human evaluation studies for summarization either exhibit a low inter-annotator agreement or have insufficient scale, and an in-depth analysis of human evaluation is lacking. Therefore, we address the shortcomings of existing summarization evaluation along the following axes: (1) We propose a modified summarization salience protocol, Atomic Content Units (ACUs), which is based on fine-grained semantic units and allows for a high inter-annotator agreement. (2) We curate the Robust Summarization Evaluation (RoSE) benchmark, a large human evaluation dataset consisting of 22,000 summary-level annotations over 28 top-performing systems on three datasets. (3) We conduct a comparative study of four human evaluation protocols, underscoring potential confounding factors in evaluation setups. (4) We evaluate 50 automatic metrics and their variants using the collected human annotations across evaluation protocols and demonstrate how our benchmark leads to more statistically stable and significant results. The metrics we benchmarked include recent methods based on LLMs, GPTScore and G-Eval. Furthermore, our findings have important implications for evaluating LLMs, as we show that LLMs adjusted by human feedback (e.g., GPT-3.5) may overfit unconstrained human evaluation, which is affected by the annotators' prior, input-agnostic preferences, calling for more robust, targeted evaluation methods.
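As a concrete illustration of the recall-style scoring behind the ACU protocol, here is a minimal Python sketch. It assumes each reference summary comes with a list of atomic content units and a binary human judgment of whether the candidate summary covers each unit; the function and variable names are illustrative, not taken from the RoSE codebase.

```python
from statistics import mean
from typing import Sequence


def acu_score(judgments: Sequence[bool]) -> float:
    """Recall-style ACU score for one (summary, reference) pair.

    judgments[i] is True when annotators marked the i-th atomic
    content unit (ACU) of the reference as covered by the candidate
    summary. Returns a score in [0, 1].
    """
    if not judgments:
        raise ValueError("each reference needs at least one ACU judgment")
    return sum(judgments) / len(judgments)


def system_score(per_summary_judgments: Sequence[Sequence[bool]]) -> float:
    """Average summary-level ACU scores into one system-level score."""
    return mean(acu_score(j) for j in per_summary_judgments)


# Toy example: a system matching 3/4 ACUs on one input and 2/3 on another.
print(system_score([[True, True, False, True], [True, False, True]]))  # ~0.708
```

Because each judgment is a binary decision about one fine-grained semantic unit rather than a holistic Likert rating, annotator disagreements localize to individual ACUs, which is the mechanism behind the protocol's higher inter-annotator agreement.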

Authors (11)
  1. Yixin Liu (108 papers)
  2. Alexander R. Fabbri (34 papers)
  3. Pengfei Liu (191 papers)
  4. Yilun Zhao (59 papers)
  5. Linyong Nan (17 papers)
  6. Ruilin Han (1 paper)
  7. Simeng Han (20 papers)
  8. Shafiq Joty (187 papers)
  9. Chien-Sheng Wu (77 papers)
  10. Caiming Xiong (337 papers)
  11. Dragomir Radev (98 papers)
Citations (108)