An Exploratory Study on Long Dialogue Summarization: What Works and What's Next (2109.04609v1)

Published 10 Sep 2021 in cs.CL

Abstract: Dialogue summarization helps readers capture salient information from long conversations in meetings, interviews, and TV series. However, real-world dialogues pose a great challenge to current summarization models, as the dialogue length typically exceeds the input limits imposed by recent transformer-based pre-trained models, and the interactive nature of dialogues makes relevant information more context-dependent and sparsely distributed than news articles. In this work, we perform a comprehensive study on long dialogue summarization by investigating three strategies to deal with the lengthy input problem and locate relevant information: (1) extended transformer models such as Longformer, (2) retrieve-then-summarize pipeline models with several dialogue utterance retrieval methods, and (3) hierarchical dialogue encoding models such as HMNet. Our experimental results on three long dialogue datasets (QMSum, MediaSum, SummScreen) show that the retrieve-then-summarize pipeline models yield the best performance. We also demonstrate that the summary quality can be further improved with a stronger retrieval model and pretraining on proper external summarization datasets.
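
The best-performing strategy in the study, the retrieve-then-summarize pipeline, is easy to picture in code. Below is a minimal sketch under assumed tooling: a sentence-transformers model for dense utterance retrieval and a BART summarizer from Hugging Face. The paper compares several retrieval methods and summarizer backbones, so this is an illustration of the general pattern, not the authors' exact pipeline.

```python
# Minimal retrieve-then-summarize sketch (illustrative, not the paper's
# exact setup). Assumptions: dense retrieval with sentence-transformers
# and a BART summarizer; the paper also evaluates other retrievers.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

retriever = SentenceTransformer("all-MiniLM-L6-v2")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def retrieve_then_summarize(query: str, utterances: list[str], top_k: int = 20) -> str:
    """Keep the utterances most relevant to the query, then summarize them."""
    # Embed the query and every dialogue utterance.
    query_emb = retriever.encode(query, convert_to_tensor=True)
    utt_embs = retriever.encode(utterances, convert_to_tensor=True)
    # Rank utterances by cosine similarity; keep the top-k and restore
    # their original dialogue order so the context reads coherently.
    hits = util.semantic_search(query_emb, utt_embs, top_k=top_k)[0]
    kept = sorted(hit["corpus_id"] for hit in hits)
    context = " ".join(utterances[i] for i in kept)
    # The retrieved context is short enough to fit the summarizer's
    # input limit, sidestepping the long-input problem.
    return summarizer(context, max_length=128, min_length=30)[0]["summary_text"]
```

The design choice this illustrates: rather than extending the model's input window (Longformer) or encoding the dialogue hierarchically (HMNet), the pipeline filters the dialogue down to query-relevant utterances first, so any standard pre-trained summarizer can be used unchanged.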

Authors (9)
  1. Yusen Zhang (30 papers)
  2. Ansong Ni (17 papers)
  3. Tao Yu (282 papers)
  4. Rui Zhang (1138 papers)
  5. Chenguang Zhu (100 papers)
  6. Budhaditya Deb (11 papers)
  7. Asli Celikyilmaz (80 papers)
  8. Ahmed Hassan Awadallah (50 papers)
  9. Dragomir Radev (98 papers)
Citations (50)