
Post-Training Dialogue Summarization using Pseudo-Paraphrasing (2204.13498v1)

Published 28 Apr 2022 in cs.CL

Abstract: Previous dialogue summarization techniques adapt language models pretrained on narrative text by injecting dialogue-specific features into the models. These features either require additional knowledge to recognize or make the resulting models harder to tune. To bridge the format gap between dialogues and narrative summaries in dialogue summarization tasks, we propose to post-train pretrained language models (PLMs) to rephrase dialogues into narratives. The model is then fine-tuned for dialogue summarization as usual. Comprehensive experiments show that our approach significantly improves vanilla PLMs on dialogue summarization and outperforms other SOTA models in both summary quality and implementation cost.
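
The abstract describes a two-stage recipe: first post-train a PLM on dialogue-to-narrative rephrasing pairs, then fine-tune the same weights on dialogue summarization. Below is a minimal sketch of that pipeline using a HuggingFace BART backbone; the construction of the pseudo-paraphrase pairs (the paper's actual contribution) is not shown, and the tiny inline training pairs are hypothetical stand-ins for the real datasets.

```python
# Hedged sketch of post-training followed by fine-tuning, assuming a BART
# backbone and HuggingFace Trainer. Not the authors' released code.
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Trainer, TrainingArguments)

model_name = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def encode(pairs):
    # Turn (dialogue, target_text) pairs into seq2seq training features.
    return [tokenizer(src, text_target=tgt, truncation=True, max_length=1024)
            for src, tgt in pairs]

# Stage 1 (post-training): dialogue -> narrative rephrasings. The paper
# derives such pseudo-paraphrases automatically; these are toy examples.
pseudo_pairs = [
    ("Alice: hi Bob, lunch? Bob: sure, noon works.",
     "Alice asked Bob to lunch, and Bob agreed to meet at noon."),
]

# Stage 2 (fine-tuning): dialogue -> summary, e.g. SAMSum-style data.
summary_pairs = [
    ("Alice: hi Bob, lunch? Bob: sure, noon works.",
     "Alice and Bob will have lunch at noon."),
]

for stage, pairs in [("posttrain", pseudo_pairs), ("finetune", summary_pairs)]:
    trainer = Trainer(
        model=model,  # weights carry over from stage 1 into stage 2
        args=TrainingArguments(output_dir=f"out/{stage}",
                               num_train_epochs=1,
                               per_device_train_batch_size=2,
                               report_to=[]),
        train_dataset=encode(pairs),
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()
```

Because both stages are plain sequence-to-sequence training on the same model object, the post-training step needs no architectural changes, which matches the abstract's claim of lower implementation cost than feature-injection approaches.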

Authors (4)
  1. Qi Jia (42 papers)
  2. Yizhu Liu (9 papers)
  3. Haifeng Tang (20 papers)
  4. Kenny Q. Zhu (50 papers)
Citations (7)
