SPECTRUM: Speaker-Enhanced Pre-Training for Long Dialogue Summarization (2401.17597v1)

Published 31 Jan 2024 in cs.CL

Abstract: Multi-turn dialogues are characterized by their extended length and turn-taking between speakers. Traditional LLMs often overlook the distinct features of these dialogues by treating them as regular text. In this paper, we propose a speaker-enhanced pre-training method for long dialogue summarization that leverages the inherent structure of multi-turn dialogues. To support our study, we curate a diverse dataset that includes transcripts from real-world scenarios, movie and TV show transcripts, and dialogues generated by an LLM. We then pre-train with two objectives: speaker change detection and masked utterance generation. Experimental results from fine-tuned models demonstrate that our model achieves state-of-the-art performance on long-context downstream benchmarks, surpassing baseline models and highlighting the effectiveness of our approach. Our findings underscore the importance of curating pre-training datasets with diverse and varied length distributions to ensure effective alignment with downstream datasets.
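
As a reading aid, here is a minimal sketch of how the two pre-training objectives named in the abstract (speaker change detection and masked utterance generation) could be instantiated as training examples. The function name, mask token, masking rate, and data layout are illustrative assumptions, not the paper's implementation.

```python
import random

MASK_TOKEN = "<mask>"  # placeholder mask token; the paper's actual token is an assumption here

def build_pretraining_example(dialogue, mask_prob=0.15, seed=None):
    """Turn a list of (speaker, utterance) pairs into inputs for the two
    objectives sketched in the abstract: speaker change detection and
    masked utterance generation.

    Returns:
        inputs: the utterances, with some replaced entirely by MASK_TOKEN
        change_labels: 1 if the speaker differs from the previous turn, else 0
        targets: the original utterances that were masked out
    """
    rng = random.Random(seed)
    inputs, change_labels, targets = [], [], []
    prev_speaker = None
    for speaker, utterance in dialogue:
        # Objective 1: binary label marking a speaker change at this turn.
        change_labels.append(int(prev_speaker is not None and speaker != prev_speaker))
        prev_speaker = speaker
        # Objective 2: mask whole utterances for the model to regenerate.
        if rng.random() < mask_prob:
            inputs.append(MASK_TOKEN)
            targets.append(utterance)
        else:
            inputs.append(utterance)
    return inputs, change_labels, targets

# Usage on a toy dialogue:
dialogue = [
    ("Alice", "Did you read the report?"),
    ("Bob", "Yes, the numbers look off."),
    ("Bob", "Especially in section three."),
    ("Alice", "Let's flag it in the meeting."),
]
print(build_pretraining_example(dialogue, mask_prob=0.5, seed=0))
```

Masking whole utterances rather than random tokens, and labeling turn boundaries explicitly, is what ties these objectives to dialogue structure instead of treating the transcript as flat text.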

Authors (5)
  1. Sangwoo Cho (22 papers)
  2. Kaiqiang Song (32 papers)
  3. Chao Zhao (46 papers)
  4. Xiaoyang Wang (134 papers)
  5. Dong Yu (329 papers)