Improving Factual Consistency of Text Summarization by Adversarially Decoupling Comprehension and Embellishment Abilities of LLMs (2310.19347v3)

Published 30 Oct 2023 in cs.CL and cs.AI

Abstract: Despite recent progress in text summarization with LLMs, they often generate summaries that are factually inconsistent with the original articles, a phenomenon known as "hallucination" in text generation. Unlike earlier small models (e.g., BART, T5), current LLMs make fewer obvious mistakes but more sophisticated ones, such as imposing spurious cause-and-effect relations, adding false details, and overgeneralizing. These hallucinations are difficult to detect with traditional methods, which makes improving the factual consistency of text summarization challenging. In this paper, we propose an adversarially DEcoupling method to disentangle the Comprehension and EmbellishmeNT abilities of LLMs (DECENT). Furthermore, we adopt probing-based efficient training to compensate for LLMs' insufficient sensitivity to the distinction between true and false content during training. In this way, LLMs are less prone to conflating embellishment with comprehension, so they can follow instructions more accurately and are better at distinguishing hallucinations. Experimental results show that DECENT significantly improves the reliability of LLM-based text summarization.
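
The abstract describes the method only at a high level. As a rough illustration of what "probing-based efficient training" could look like, the PyTorch sketch below trains a small linear probe on frozen LLM hidden states to classify each token as faithful or embellished; only the probe's parameters are updated, which is what would keep the extra supervision cheap. All names here (`FaithfulnessProbe`, the label scheme, the probed layer) are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class FaithfulnessProbe(nn.Module):
    """Hypothetical linear probe over frozen LLM hidden states.

    Scores each token position as faithful (0) or embellished (1).
    An illustrative sketch, not DECENT's actual architecture.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, 2)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size), taken from some
        # layer of a frozen LLM; gradients flow only into the probe.
        return self.classifier(hidden_states)  # (batch, seq_len, 2)


probe = FaithfulnessProbe(hidden_size=4096)
criterion = nn.CrossEntropyLoss(ignore_index=-100)  # -100 masks unlabeled tokens

def probing_loss(hidden_states: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Auxiliary loss: only the probe is trained; the LLM stays frozen."""
    logits = probe(hidden_states)
    return criterion(logits.view(-1, 2), labels.view(-1))

# Toy usage with random tensors standing in for real hidden states:
hidden = torch.randn(2, 16, 4096)       # detached states from a frozen LLM
labels = torch.randint(0, 2, (2, 16))   # token-level faithful/embellished labels
loss = probing_loss(hidden, labels)
loss.backward()  # updates flow only into probe.classifier
```

Pairing such a probe's signal with an adversarial term that penalizes embellishment during comprehension-oriented generation would be one way to realize the decoupling the abstract describes, though the paper's exact losses may differ.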

Authors (9)
  1. Huawen Feng (8 papers)
  2. Yan Fan (12 papers)
  3. Xiong Liu (26 papers)
  4. Ting-En Lin (28 papers)
  5. Zekun Yao (2 papers)
  6. Yuchuan Wu (33 papers)
  7. Fei Huang (408 papers)
  8. Yongbin Li (128 papers)
  9. Qianli Ma (77 papers)
Citations (3)