
Planning with Learned Entity Prompts for Abstractive Summarization (2104.07606v2)

Published 15 Apr 2021 in cs.CL

Abstract: We introduce a simple but flexible mechanism to learn an intermediate plan to ground the generation of abstractive summaries. Specifically, we prepend (or prompt) target summaries with entity chains -- ordered sequences of entities mentioned in the summary. Transformer-based sequence-to-sequence models are then trained to generate the entity chain and to continue generating the summary conditioned on the entity chain and the input. We experiment with both pretraining and finetuning with this content planning objective. When evaluated on CNN/DailyMail, XSum, SAMSum, and BillSum, grounded generation with the planning objective improves entity specificity and planning in summaries for all datasets, and achieves state-of-the-art ROUGE performance on XSum and SAMSum. Moreover, we demonstrate empirically that planning with entity chains provides a mechanism to control hallucinations in abstractive summaries: by prompting the decoder with a modified content plan that drops hallucinated entities, we outperform state-of-the-art approaches for faithfulness in both automatic and human evaluations.
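The sketch below illustrates the training objective described in the abstract, using Hugging Face T5 as a stand-in seq2seq model. The marker tokens ("[ENTITYCHAIN]", "[SUMMARY]"), the "|" separator, and the toy example are illustrative assumptions, not necessarily the paper's exact format: the target summary is prefixed with its ordered entity chain, and the model is trained with ordinary teacher forcing on the concatenated sequence.

```python
# A minimal sketch of entity-chain prompting for summarization targets.
# Marker tokens and the entity separator are assumptions for illustration.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def build_target(summary: str, entities: list[str]) -> str:
    """Prepend the ordered entity chain so the model learns to emit the
    plan first, then the summary conditioned on it and the input."""
    return f"[ENTITYCHAIN] {' | '.join(entities)} [SUMMARY] {summary}"

document = "Disney released Frozen in 2013. The film grossed $1.28 billion."
summary = "Frozen, released by Disney in 2013, earned $1.28 billion."
entities = ["Frozen", "Disney", "2013"]  # e.g. extracted by an NER tagger

inputs = tokenizer(document, return_tensors="pt")
labels = tokenizer(build_target(summary, entities), return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # standard seq2seq training loss
```

The same mechanism gives the hallucination-control knob the abstract mentions: at inference time, the predicted plan can be edited to drop unsupported entities, and the decoder is then seeded with the modified plan and allowed to continue generating the summary. A hedged sketch, reusing the model, tokenizer, and inputs from above:

```python
# Prompt the decoder with an edited content plan (hallucinated entities
# removed) and let the model generate the summary continuation.
import torch

plan = "[ENTITYCHAIN] Frozen | Disney [SUMMARY]"  # edited plan
plan_ids = tokenizer(plan, add_special_tokens=False, return_tensors="pt").input_ids
start = torch.tensor([[model.config.decoder_start_token_id]])
decoder_prefix = torch.cat([start, plan_ids], dim=1)

out = model.generate(
    **inputs,
    decoder_input_ids=decoder_prefix,  # the modified plan acts as a decoder prompt
    max_new_tokens=64,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```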

Authors (6)
  1. Shashi Narayan (35 papers)
  2. Yao Zhao (272 papers)
  3. Joshua Maynez (28 papers)
  4. Vitaly Nikolaev (12 papers)
  5. Ryan McDonald (24 papers)
  6. Gonçalo Simoes (2 papers)
Citations (107)
