
Sequence Level Contrastive Learning for Text Summarization (2109.03481v4)

Published 8 Sep 2021 in cs.CL

Abstract: Contrastive learning models have achieved great success in unsupervised visual representation learning, where they maximize the similarities between feature representations of different views of the same image while minimizing the similarities between feature representations of views of different images. In text summarization, the output summary is a shorter form of the input document, and the two have similar meanings. In this paper, we propose a contrastive learning model for supervised abstractive text summarization, in which we view a document, its gold summary, and its model-generated summaries as different views of the same meaning representation and maximize the similarities between them during training. We improve over a strong sequence-to-sequence text generation model (i.e., BART) on three different summarization datasets. Human evaluation also shows that our model achieves better faithfulness ratings than its counterpart trained without the contrastive objective.
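To make the idea concrete, below is a minimal sketch of one way a sequence-level contrastive term of this flavor could be implemented: mean-pooled sequence representations of a document and its (gold or generated) summary are treated as positive pairs, with other examples in the batch as negatives, using an in-batch InfoNCE-style loss. The function name, the pooling choice, and the temperature value are illustrative assumptions, not the paper's exact formulation, and the loss would be added to the usual generation cross-entropy of the seq2seq model.

```python
import torch
import torch.nn.functional as F

def sequence_level_contrastive_loss(doc_repr, summary_repr, temperature=0.1):
    """Illustrative in-batch contrastive loss over sequence representations.

    doc_repr, summary_repr: [batch, hidden] pooled representations of each
    document and its summary (gold or model-generated). Matching rows are
    treated as positive pairs; other rows in the batch act as negatives.
    """
    doc = F.normalize(doc_repr, dim=-1)
    summ = F.normalize(summary_repr, dim=-1)
    logits = doc @ summ.t() / temperature              # [batch, batch] cosine similarities
    targets = torch.arange(doc.size(0), device=doc.device)
    # Symmetric loss over both directions (document->summary, summary->document).
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example usage with random features standing in for pooled encoder states
# (e.g., mean-pooled BART hidden states).
batch, hidden = 4, 768
doc_h = torch.randn(batch, hidden)
summary_h = torch.randn(batch, hidden)
loss = sequence_level_contrastive_loss(doc_h, summary_h)
# During training, this term would be combined with the standard
# cross-entropy generation loss of the summarization model.
```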

Authors (4)
  1. Shusheng Xu (11 papers)
  2. Xingxing Zhang (65 papers)
  3. Yi Wu (171 papers)
  4. Furu Wei (291 papers)
Citations (85)
