Multi-Fact Correction in Abstractive Text Summarization (2010.02443v1)

Published 6 Oct 2020 in cs.CL

Abstract: Pre-trained neural abstractive summarization systems have dominated extractive strategies on news summarization performance, at least in terms of ROUGE. However, system-generated abstractive summaries often face the pitfall of factual inconsistency: generating incorrect facts with respect to the source text. To address this challenge, we propose Span-Fact, a suite of two factual correction models that leverages knowledge learned from question answering models to make corrections in system-generated summaries via span selection. Our models employ single or multi-masking strategies to either iteratively or auto-regressively replace entities in order to ensure semantic consistency w.r.t. the source text, while retaining the syntactic structure of summaries generated by abstractive summarization models. Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality in terms of both automatic metrics and human evaluation.
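Below is a minimal sketch of the single-mask, iterative correction idea the abstract describes: mask one entity at a time in the system-generated summary, then use a QA-style span selector over the source document to pick a replacement span. The off-the-shelf NER and question-answering pipelines used here are stand-ins for illustration, not the paper's trained Span-Fact models.

```python
# Sketch of iterative, single-mask entity correction (assumption: generic
# Hugging Face pipelines stand in for the paper's trained span-selection model).
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")  # entity tagger (stand-in)
qa = pipeline("question-answering")                   # span selector (stand-in)

def correct_summary(summary: str, source: str) -> str:
    """Replace each entity in `summary` with a span selected from `source`."""
    entities = ner(summary)
    corrected = summary
    # Process entities right-to-left so earlier character offsets stay valid.
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        # Mask the current entity, keeping the summary's syntactic structure intact.
        masked = corrected[:ent["start"]] + "[MASK]" + corrected[ent["end"]:]
        # Treat the masked summary as a cloze-style query against the source and
        # take the highest-scoring source span as the replacement entity.
        answer = qa(question=masked, context=source)
        corrected = corrected[:ent["start"]] + answer["answer"] + corrected[ent["end"]:]
    return corrected
```

The multi-mask, auto-regressive variant described in the abstract would mask all entities at once and fill them left to right, conditioning each replacement on the spans already selected.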

Authors (6)
  1. Yue Dong (61 papers)
  2. Shuohang Wang (69 papers)
  3. Zhe Gan (135 papers)
  4. Yu Cheng (354 papers)
  5. Jackie Chi Kit Cheung (57 papers)
  6. Jingjing Liu (139 papers)
Citations (116)