SueNes: A Weakly Supervised Approach to Evaluating Single-Document Summarization via Negative Sampling (2005.06377v3)

Published 13 May 2020 in cs.CL, cs.IR, and cs.LG

Abstract: Canonical automatic summary evaluation metrics, such as ROUGE, focus on lexical similarity; they capture neither semantics nor linguistic quality well, and they require a reference summary that is costly to obtain. Recently, a growing number of efforts have aimed to alleviate one or both of these drawbacks. In this paper, we present a proof-of-concept study of a weakly supervised summary evaluation approach that does not require reference summaries. Massive data in existing summarization datasets are transformed for training by pairing documents with corrupted reference summaries. In cross-domain tests, our strategy outperforms baselines with promising improvements, and it shows a clear advantage over all other metrics in gauging linguistic quality.
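
The abstract only sketches the core idea: reference summaries in existing datasets are corrupted to create negative samples, yielding (document, summary, label) training triples without human annotation. The following is a minimal, hypothetical Python sketch of that kind of negative sampling; the specific corruption operations (sentence deletion and adjacent swaps), the retained-fraction label, and all function names are illustrative assumptions, not the paper's exact procedure.

```python
import random

def corrupt_summary(summary_sentences, deletion_prob=0.3, swap_prob=0.3, rng=None):
    """Degrade a reference summary by randomly deleting sentences
    and swapping adjacent sentences (assumed corruption operations)."""
    rng = rng or random.Random()
    # Randomly drop sentences, keeping at least one.
    kept = [s for s in summary_sentences if rng.random() > deletion_prob]
    if not kept:
        kept = [rng.choice(summary_sentences)]
    # Randomly swap adjacent sentences to damage coherence.
    for i in range(len(kept) - 1):
        if rng.random() < swap_prob:
            kept[i], kept[i + 1] = kept[i + 1], kept[i]
    return kept

def make_training_pairs(document, summary_sentences, num_negatives=3, rng=None):
    """Pair a document with its intact summary (label 1.0) and with
    several corrupted versions carrying lower quality labels."""
    rng = rng or random.Random()
    pairs = [(document, " ".join(summary_sentences), 1.0)]
    for _ in range(num_negatives):
        corrupted = corrupt_summary(summary_sentences, rng=rng)
        # Simple proxy label: fraction of original sentences retained.
        label = len(corrupted) / len(summary_sentences)
        pairs.append((document, " ".join(corrupted), label))
    return pairs

if __name__ == "__main__":
    doc = "Full text of the source article goes here."
    ref = [
        "Sentence one of the reference summary.",
        "Sentence two adds more detail.",
        "Sentence three concludes.",
    ]
    for d, s, y in make_training_pairs(doc, ref, rng=random.Random(0)):
        print(f"label={y:.2f}  summary={s}")
```

Triples produced this way could then train a regression model that scores a candidate summary against its source document alone, which is what lets the learned metric operate without reference summaries at evaluation time.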

Authors (7)
  1. Forrest Sheng Bao (16 papers)
  2. Hebi Li (5 papers)
  3. Ge Luo (8 papers)
  4. Minghui Qiu (58 papers)
  5. Yinfei Yang (73 papers)
  6. Youbiao He (7 papers)
  7. Cen Chen (81 papers)
Citations (4)