Learning by Semantic Similarity Makes Abstractive Summarization Better (2002.07767v2)

Published 18 Feb 2020 in cs.CL

Abstract: By harnessing pre-trained language models, summarization models have made rapid progress in recent years. However, these models are mainly assessed by automatic evaluation metrics such as ROUGE. Although ROUGE is known to correlate positively with human evaluation scores, it has been criticized for its vulnerability and for the gap between its scores and the actual quality of summaries. In this paper, we compare summaries generated by a recent language model, BART, with the reference summaries from a benchmark dataset, CNN/DM, using a crowd-sourced human evaluation metric. Interestingly, the model-generated summaries receive higher scores than the reference summaries. Stemming from our experimental results, we first discuss the intrinsic characteristics of the CNN/DM dataset, the progress of pre-trained language models, and their ability to generalize from the training data. Finally, we share our insights into the model-generated summaries and present our thoughts on learning methods for abstractive summarization.
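As a rough illustration of the evaluation setup the abstract describes, the sketch below (not the authors' code) generates a summary with a publicly available BART checkpoint fine-tuned on CNN/DM and scores it against a reference summary with ROUGE. The checkpoint name, generation parameters, and the `rouge-score` package are assumptions, not details from the paper.

```python
# Minimal sketch: BART summary generation + ROUGE scoring.
# Assumes `transformers` and `rouge-score` are installed; the checkpoint
# and generation settings below are illustrative, not the paper's setup.
from transformers import BartTokenizer, BartForConditionalGeneration
from rouge_score import rouge_scorer

model_name = "facebook/bart-large-cnn"  # public BART checkpoint for CNN/DM
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

article = "..."            # placeholder: a CNN/DM article body
reference_summary = "..."  # placeholder: its reference (highlights) summary

# Generate an abstractive summary with beam search.
inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=142)
candidate = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# ROUGE measures surface n-gram overlap with the reference -- the kind of
# automatic metric whose gap from human judgments the paper examines.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
print(scorer.score(reference_summary, candidate))
```

Note that ROUGE rewards lexical overlap only; a summary can paraphrase the reference faithfully and still score poorly, which motivates the paper's interest in semantic-similarity-based learning.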

Authors (5)
  1. Wonjin Yoon
  2. Yoon Sun Yeo
  3. Minbyul Jeong
  4. Bong-Jun Yi
  5. Jaewoo Kang
Citations (16)