STORYSUMM: Evaluating Faithfulness in Story Summarization (2407.06501v2)
Abstract: Human evaluation has been the gold standard for checking faithfulness in abstractive summarization. However, with a challenging source domain like narrative, multiple annotators can agree that a summary is faithful while missing details that are obvious errors only once they are pointed out. We therefore introduce a new dataset, STORYSUMM, comprising LLM summaries of short stories with localized faithfulness labels and error explanations. This benchmark targets evaluation methods, testing whether a given method can detect challenging inconsistencies. Using this dataset, we first show that any single human annotation protocol is likely to miss inconsistencies, and we advocate for pursuing a range of methods when establishing ground truth for a summarization dataset. We then test recent automatic metrics and find that none of them achieve more than 70% balanced accuracy on this task, demonstrating that it is a challenging benchmark for future work in faithfulness evaluation.
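The 70% figure refers to balanced accuracy over binary faithful/unfaithful labels, i.e., the mean of the recall on each of the two classes. Below is a minimal illustrative sketch of that scoring, not the paper's actual evaluation code; the label encoding (1 = faithful, 0 = unfaithful) and the example inputs are assumptions for illustration.

```python
# Minimal sketch (not the paper's code): balanced accuracy for binary
# faithful (1) / unfaithful (0) labels, as used to score metrics on STORYSUMM.
def balanced_accuracy(gold, pred):
    """Mean of per-class recall: 0.5 * (true positive rate + true negative rate)."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    tn = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 0)
    pos = sum(1 for g in gold if g == 1)
    neg = sum(1 for g in gold if g == 0)
    return 0.5 * (tp / pos + tn / neg)

# Hypothetical example: a metric's faithfulness predictions vs. human labels.
gold = [1, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 1, 0]
print(balanced_accuracy(gold, pred))  # ~0.667
```

Balanced accuracy is a natural choice here because faithful and unfaithful summaries are typically imbalanced, so plain accuracy would reward a metric that labels everything faithful.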
- Melanie Subbiah (11 papers)
- Faisal Ladhak (31 papers)
- Akankshya Mishra (3 papers)
- Griffin Adams (14 papers)
- Lydia B. Chilton (26 papers)
- Kathleen McKeown (85 papers)