Improving Abstractive Text Summarization with History Aggregation (1912.11046v1)

Published 24 Dec 2019 in cs.CL

Abstract: Recent neural sequence-to-sequence models have provided feasible solutions for abstractive summarization. However, such models still struggle to capture long-range dependencies in the summarization task. A high-quality summarization system usually depends on a strong encoder that can distill important information from long input texts, so that the decoder can generate salient summaries from the encoder's memory. In this paper, we propose an aggregation mechanism based on the Transformer model to address the challenge of long-text representation. Our model can review history information, giving the encoder greater memory capacity. Empirically, we apply our aggregation mechanism to the Transformer model and experiment on the CNN/DailyMail dataset, achieving higher-quality summaries than several strong baseline models on the ROUGE metrics.
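The abstract does not spell out how the aggregation mechanism works, but the stated idea, letting the encoder "review history information", can be sketched as attention over the hidden states of earlier encoder layers. The following is a minimal, hypothetical NumPy illustration of that idea (the function name, shapes, and the use of the top layer as the query are assumptions, not the paper's exact design):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_history(layer_outputs):
    """Fuse the hidden states of all encoder layers into one memory.

    layer_outputs: list of (seq_len, d_model) arrays, one per layer.
    Each position attends over its own states across layers, so the
    final memory can draw on information from earlier layers instead
    of only the top layer. This is an illustrative sketch, not the
    authors' exact mechanism.
    """
    H = np.stack(layer_outputs)            # (n_layers, seq_len, d_model)
    n_layers, seq_len, d_model = H.shape
    query = H[-1]                          # assume top layer queries the history
    # scores[t, l]: relevance of layer l's state at position t
    scores = np.einsum('td,ltd->tl', query, H) / np.sqrt(d_model)
    weights = softmax(scores, axis=-1)     # (seq_len, n_layers)
    memory = np.einsum('tl,ltd->td', weights, H)
    return memory

# Toy usage: 3 encoder layers, 5 tokens, 8-dimensional states.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((5, 8)) for _ in range(3)]
memory = aggregate_history(layers)
print(memory.shape)  # (5, 8)
```

The decoder would then attend over `memory` in place of (or in addition to) the top encoder layer's output, which is one plausible way an encoder could "hold more memory capacity" without lengthening the input.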

Authors (4)
  1. Pengcheng Liao (7 papers)
  2. Chuang Zhang (78 papers)
  3. Xiaojun Chen (100 papers)
  4. Xiaofei Zhou (14 papers)
Citations (8)
