Improving Factual Consistency of Abstractive Summarization via Question Answering (2105.04623v1)

Published 10 May 2021 in cs.CL and cs.AI

Abstract: A commonly observed problem with state-of-the-art abstractive summarization models is that the generated summaries can be factually inconsistent with the input documents. The fact that automatic summarization may produce plausible-sounding yet inaccurate summaries is a major concern that limits its wide application. In this paper we present an approach to address factual consistency in summarization. We first propose an efficient automatic evaluation metric to measure factual consistency; next, we propose a novel learning algorithm that maximizes the proposed metric during model training. Through extensive experiments, we confirm that our method is effective in improving factual consistency and even the overall quality of the summaries, as judged by both automatic metrics and human evaluation.
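The abstract describes a two-part method: a QA-based metric for factual consistency, and a training procedure that optimizes summarization models against that metric. As a rough illustration of the metric's core idea, here is a minimal sketch of a QA-based consistency check: questions (assumed to be produced from the summary by a separate question-generation model) are answered against both the summary and the source document, and agreement between the two answer sets is scored. The QA model name, the helper function, and the agreement heuristic are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a QA-based factual-consistency score, in the spirit of
# the metric outlined in the abstract. NOT the authors' implementation:
# the QA model and the string-overlap agreement heuristic are illustrative
# assumptions, and the questions are assumed to come from a separate
# question-generation model run over the summary.
from transformers import pipeline

# Any extractive QA model works here; this is a common public default.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def qa_consistency(document: str, summary: str, questions: list[str]) -> float:
    """Fraction of questions whose answers agree between summary and document.

    A summary that invents facts tends to yield answers that the source
    document does not support, lowering the score.
    """
    if not questions:
        return 0.0
    agree = 0
    for q in questions:
        ans_sum = qa(question=q, context=summary)["answer"].strip().lower()
        ans_doc = qa(question=q, context=document)["answer"].strip().lower()
        # Loose containment-based agreement; the paper's metric is more
        # refined, and this heuristic only approximates the idea.
        agree += int(ans_sum in ans_doc or ans_doc in ans_sum)
    return agree / len(questions)
```

In the paper's second step, a score of this kind is used as a training signal so that the summarizer is pushed toward summaries whose question-answer pairs remain consistent with the input document.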

Authors (10)
  1. Feng Nan (22 papers)
  2. Cicero Nogueira dos Santos (31 papers)
  3. Henghui Zhu (24 papers)
  4. Patrick Ng (29 papers)
  5. Kathleen McKeown (85 papers)
  6. Ramesh Nallapati (38 papers)
  7. Dejiao Zhang (20 papers)
  8. Zhiguo Wang (100 papers)
  9. Andrew O. Arnold (9 papers)
  10. Bing Xiang (74 papers)
Citations (80)