A Focused Study on Sequence Length for Dialogue Summarization (2209.11910v2)

Published 24 Sep 2022 in cs.CL and cs.HC

Abstract: Output length is critical to dialogue summarization systems. The dialogue summary length is determined by multiple factors, including dialogue complexity, summary objective, and personal preferences. In this work, we approach dialogue summary length from three perspectives. First, we analyze the length differences between existing models' outputs and the corresponding human references and find that summarization models tend to produce more verbose summaries due to their pretraining objectives. Second, we identify salient features for summary length prediction by comparing different model settings. Third, we experiment with a length-aware summarizer and show notable improvement on existing models if summary length can be well incorporated. Analysis and experiments are conducted on popular DialogSum and SAMSum datasets to validate our findings.
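The paper's first analysis compares the length of model-generated summaries against human references. A minimal sketch of that comparison, using whitespace tokenization and invented toy data (not drawn from DialogSum or SAMSum, and not the authors' actual evaluation code):

```python
# Hypothetical illustration: measuring summary verbosity by comparing
# model output lengths to human reference lengths, as in the paper's
# first analysis. The example strings below are invented.

def length_stats(model_summaries, references):
    """Return (avg model length, avg reference length, verbosity ratio),
    with length measured in whitespace-delimited tokens."""
    model_lens = [len(s.split()) for s in model_summaries]
    ref_lens = [len(r.split()) for r in references]
    avg_model = sum(model_lens) / len(model_lens)
    avg_ref = sum(ref_lens) / len(ref_lens)
    return avg_model, avg_ref, avg_model / avg_ref

# Toy pair where the model output is more verbose than the reference,
# mirroring the paper's finding that models over-generate.
outputs = ["Amanda asked Betty for Larry's number and Betty said she does not have it ."]
refs = ["Amanda could not get Larry's number from Betty ."]
avg_out, avg_ref, ratio = length_stats(outputs, refs)
```

A verbosity ratio above 1.0 would indicate the over-generation the abstract describes; the paper attributes this tendency to pretraining objectives.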

Authors (4)
  1. Bin Wang (750 papers)
  2. Chen Zhang (403 papers)
  3. Chengwei Wei (17 papers)
  4. Haizhou Li (286 papers)
Citations (7)