Investigating Crowdsourcing Protocols for Evaluating the Factual Consistency of Summaries (2109.09195v3)

Published 19 Sep 2021 in cs.CL

Abstract: Current pre-trained models applied to summarization are prone to factual inconsistencies which either misrepresent the source text or introduce extraneous information. Thus, comparing the factual consistency of summaries is necessary as we develop improved models. However, the optimal human evaluation setup for factual consistency has not been standardized. To address this issue, we crowdsourced evaluations for factual consistency using the rating-based Likert scale and ranking-based Best-Worst Scaling protocols, on 100 articles from each of the CNN-Daily Mail and XSum datasets over four state-of-the-art models, to determine the most reliable evaluation framework. We find that ranking-based protocols offer a more reliable measure of summary quality across datasets, while the reliability of Likert ratings depends on the target dataset and the evaluation design. Our crowdsourcing templates and summary evaluations will be publicly available to facilitate future research on factual consistency in summarization.
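
The abstract contrasts rating-based (Likert) and ranking-based (Best-Worst Scaling) protocols. For readers unfamiliar with how Best-Worst Scaling annotations are typically aggregated into per-system scores, a minimal sketch follows; this is the standard counting estimator (score = fraction chosen best minus fraction chosen worst), not code released by the authors, and the data layout and function name are hypothetical.

```python
from collections import defaultdict

def best_worst_scores(annotations):
    """Aggregate Best-Worst Scaling annotations into per-system scores.

    Each annotation is (items_shown, best_item, worst_item), where
    items_shown is the tuple of summaries (e.g., system names) presented
    together in one comparison set. The standard BWS score for an item is
    (#times chosen best - #times chosen worst) / #times shown,
    giving a value in [-1, 1].
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for items, best_item, worst_item in annotations:
        for item in items:
            shown[item] += 1
        best[best_item] += 1
        worst[worst_item] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Hypothetical example: three annotators each judge the same set of
# summaries from four systems, picking the most and least factually
# consistent one.
annotations = [
    (("sysA", "sysB", "sysC", "sysD"), "sysA", "sysD"),
    (("sysA", "sysB", "sysC", "sysD"), "sysB", "sysD"),
    (("sysA", "sysB", "sysC", "sysD"), "sysA", "sysC"),
]
print(best_worst_scores(annotations))
# {'sysA': 0.667, 'sysB': 0.333, 'sysC': -0.333, 'sysD': -0.667}
```

Ranking protocols like this sidestep per-annotator scale calibration, which is one plausible reason the paper finds them more reliable across datasets than Likert ratings.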

Authors (9)
  1. Xiangru Tang (62 papers)
  2. Alexander Fabbri (11 papers)
  3. Haoran Li (166 papers)
  4. Ziming Mao (14 papers)
  5. Griffin Thomas Adams (1 paper)
  6. Borui Wang (12 papers)
  7. Asli Celikyilmaz (81 papers)
  8. Yashar Mehdad (37 papers)
  9. Dragomir Radev (98 papers)
Citations (19)
