DYLE: Dynamic Latent Extraction for Abstractive Long-Input Summarization (2110.08168v2)

Published 15 Oct 2021 in cs.CL

Abstract: Transformer-based models have achieved state-of-the-art performance on short-input summarization. However, they still struggle with summarizing longer text. In this paper, we present DYLE, a novel dynamic latent extraction approach for abstractive long-input summarization. DYLE jointly trains an extractor and a generator and treats the extracted text snippets as the latent variable, allowing dynamic snippet-level attention weights during decoding. To provide adequate supervision, we propose simple yet effective heuristics for oracle extraction as well as a consistency loss term, which encourages the extractor to approximate the averaged dynamic weights predicted by the generator. We evaluate our method on different long-document and long-dialogue summarization tasks: GovReport, QMSum, and arXiv. Experimental results show that DYLE outperforms all existing methods on GovReport and QMSum, with gains up to 6.1 ROUGE, while yielding strong results on arXiv. Further analysis shows that the proposed dynamic weights provide interpretability of our generation process.
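The consistency loss described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the use of a KL divergence, and the normalization choices are assumptions. The core idea is that the generator produces snippet-level attention weights at each decoding step; these are averaged over steps, and the extractor's snippet scores are pushed toward that averaged distribution.

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def consistency_loss(extractor_scores, generator_step_weights):
    """Hypothetical consistency term: KL(avg generator weights || extractor dist).

    extractor_scores: raw relevance scores for k extracted snippets.
    generator_step_weights: per-decoding-step snippet attention weights,
        a list of length-T lists, each summing to 1 over the k snippets.
    """
    k = len(extractor_scores)
    n_steps = len(generator_step_weights)
    # Average the generator's dynamic snippet weights over all decoding steps.
    avg = [sum(step[i] for step in generator_step_weights) / n_steps
           for i in range(k)]
    p_ext = softmax(extractor_scores)
    # KL divergence encouraging the extractor to match the averaged weights.
    return sum(a * math.log(a / p) for a, p in zip(avg, p_ext) if a > 0)
```

When the extractor's distribution already matches the averaged generator weights, the loss is zero; any mismatch yields a positive penalty, which is the supervision signal the abstract attributes to this term.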

Authors (10)
  1. Ziming Mao (14 papers)
  2. Chen Henry Wu (17 papers)
  3. Ansong Ni (17 papers)
  4. Yusen Zhang (30 papers)
  5. Rui Zhang (1140 papers)
  6. Tao Yu (282 papers)
  7. Budhaditya Deb (11 papers)
  8. Chenguang Zhu (100 papers)
  9. Ahmed H. Awadallah (7 papers)
  10. Dragomir Radev (98 papers)
Citations (53)
