CO2Sum: Contrastive Learning for Factual-Consistent Abstractive Summarization (2112.01147v2)

Published 2 Dec 2021 in cs.CL

Abstract: Generating factually consistent summaries is a challenging task for abstractive summarization. Previous works mainly encode factual information or perform post-correction/re-ranking after decoding. In this paper, we provide a factual-consistency solution from the perspective of contrastive learning, which is a natural extension of previous works. We propose CO2Sum (Contrastive for Consistency), a contrastive learning scheme that can easily be applied to sequence-to-sequence models for factually consistent abstractive summarization, demonstrating that a model can be made fact-aware without modifying its architecture. CO2Sum applies contrastive learning on the encoder, which helps the model attend to the factual information contained in the input article, or on the decoder, which guides the model to generate factually correct output summaries. Moreover, these two schemes are orthogonal and can be combined to further improve faithfulness. Comprehensive experiments on public benchmarks demonstrate that CO2Sum improves faithfulness on large pre-trained language models and reaches competitive results compared to other strong factual-consistent summarization baselines.
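
The abstract describes contrastive objectives attached to the encoder and the decoder of a sequence-to-sequence model, combined additively with the usual generation loss. Below is a minimal, hypothetical sketch of an InfoNCE-style contrastive term that could play either role; the function name `info_nce`, the pooled-vector inputs, the negative-construction strategy, and the combination weights are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss (an assumed formulation): pull the
    anchor toward the fact-consistent positive and away from
    fact-corrupted negatives.

    anchor:    (d,)   e.g. a pooled encoder state of the source article,
                      or a pooled decoder state of the reference summary
    positive:  (d,)   a fact-consistent counterpart representation
    negatives: (k, d) fact-corrupted counterparts (e.g. entity-swapped)
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    # Cosine similarity to the positive (index 0) and to each negative.
    logits = torch.cat([(a * p).sum(-1, keepdim=True), n @ a]) / temperature
    target = torch.zeros(1, dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), target)

# Toy usage with random features standing in for pooled hidden states.
d, k = 768, 4
enc_loss = info_nce(torch.randn(d), torch.randn(d), torch.randn(k, d))
dec_loss = info_nce(torch.randn(d), torch.randn(d), torch.randn(k, d))
# The two terms are orthogonal, so a combined objective can simply add
# them (weights are assumptions) on top of the seq2seq cross-entropy.
total = 1.0 * enc_loss + 1.0 * dec_loss
print(total.item())
```

In a full training loop, this term would be added to the standard maximum-likelihood summarization loss, which matches the abstract's claim that the scheme requires no architectural changes to the underlying sequence-to-sequence model.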

Authors (6)
  1. Wei Liu (1135 papers)
  2. Huanqin Wu (2 papers)
  3. Wenjing Mu (1 paper)
  4. Zhen Li (334 papers)
  5. Tao Chen (397 papers)
  6. Dan Nie (2 papers)
Citations (14)