Enhancing Abstractiveness of Summarization Models through Calibrated Distillation (2310.13760v2)

Published 20 Oct 2023 in cs.CL

Abstract: Sequence-level knowledge distillation reduces the size of Seq2Seq models for more efficient abstractive summarization. However, it often leads to a loss of abstractiveness in the generated summaries. In this paper, we propose a novel approach named DisCal to enhance the level of abstractiveness (measured by n-gram overlap) without sacrificing the informativeness (measured by ROUGE) of generated summaries. DisCal exposes diverse pseudo summaries to the student model with two forms of supervision. First, the best pseudo summary in terms of abstractiveness and informativeness is identified and used for sequence-level distillation. Second, the ranks of the pseudo summaries are used to encourage the student model to assign higher prediction scores to higher-ranked summaries. Our experiments show that DisCal outperforms prior methods in abstractive summarization distillation, producing highly abstractive and informative summaries.
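
The abstract describes two supervision signals: sequence-level distillation on the best-ranked pseudo summary and a rank-based calibration objective. Below is a minimal, hypothetical sketch of what such a calibration term might look like in PyTorch; the function name, the margin value, and the way the two losses are combined are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def calibration_loss(scores: torch.Tensor, margin: float = 0.001) -> torch.Tensor:
    """Hypothetical rank-calibration loss (not the paper's exact objective).

    `scores` holds the student's length-normalized log-likelihoods for the
    pseudo summaries, ordered from highest-ranked (most abstractive and
    informative) to lowest-ranked. Any pair where a lower-ranked summary
    outscores a higher-ranked one by less than a rank-scaled margin is penalized.
    """
    loss = scores.new_zeros(())
    n = scores.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # Encourage scores[i] >= scores[j] + margin * (j - i).
            loss = loss + torch.clamp(margin * (j - i) - (scores[i] - scores[j]), min=0.0)
    return loss

# Usage sketch: combine with the standard sequence-level distillation loss
# (cross-entropy on the best-ranked pseudo summary), weighted by a hyperparameter:
#   total_loss = ce_loss_on_best_summary + lambda_cal * calibration_loss(scores)
```

This pairwise-margin form is one common way to turn a ranking into a training signal; the actual DisCal objective may differ in its scoring and weighting details.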

Authors (6)
  1. Hwanjun Song (44 papers)
  2. Igor Shalyminov (20 papers)
  3. Hang Su (224 papers)
  4. Siffi Singh (7 papers)
  5. Kaisheng Yao (16 papers)
  6. Saab Mansour (32 papers)
Citations (3)
