
FRSUM: Towards Faithful Abstractive Summarization via Enhancing Factual Robustness (2211.00294v1)

Published 1 Nov 2022 in cs.CL

Abstract: Despite being able to generate fluent and grammatical text, current Seq2Seq summarization models still suffer from the unfaithful generation problem. In this paper, we study the faithfulness of existing systems from the new perspective of factual robustness, which is the ability to correctly generate factual information in the face of adversarial unfaithful information. We first measure a model's factual robustness by its success rate in defending against adversarial attacks when generating factual information. A factual robustness analysis of a wide range of current systems shows good consistency with human judgments on faithfulness. Inspired by these findings, we propose to improve the faithfulness of a model by enhancing its factual robustness. Specifically, we propose a novel training strategy, namely FRSUM, which teaches the model to defend against both explicit adversarial samples and implicit factual adversarial perturbations. Extensive automatic and human evaluation results show that FRSUM consistently improves the faithfulness of various Seq2Seq models, such as T5 and BART.
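To make the robustness probe concrete, here is a minimal sketch (not the authors' released code) of the measurement the abstract describes: a model "defends" a factual token if, given the source document and the summary prefix, it scores the gold token above every adversarial distractor, and factual robustness is the success rate over many such probes. The helper name `defends`, the choice of BART checkpoint, and the first-subword approximation are all illustrative assumptions.

```python
# Sketch of a factual-robustness probe, assuming a HuggingFace BART
# summarizer. A probe passes when the gold factual token outscores all
# adversarial candidate tokens at the same decoding position.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tok = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model.eval()

def defends(source: str, prefix: str, gold: str, distractors: list[str]) -> bool:
    """True if the gold factual token outscores every adversarial candidate."""
    enc = tok(source, return_tensors="pt", truncation=True)
    dec = tok(prefix, return_tensors="pt", add_special_tokens=False)
    with torch.no_grad():
        logits = model(input_ids=enc.input_ids,
                       attention_mask=enc.attention_mask,
                       decoder_input_ids=dec.input_ids).logits
    next_tok = logits[0, -1]  # distribution over the next summary token
    # Compare first subword of each candidate (an approximation).
    gold_id = tok(" " + gold, add_special_tokens=False).input_ids[0]
    adv_ids = [tok(" " + d, add_special_tokens=False).input_ids[0]
               for d in distractors]
    return all(next_tok[gold_id] > next_tok[a] for a in adv_ids)

# Factual robustness = fraction of probes defended, e.g.:
# defends(article, "The meeting was held in", "Paris",
#         ["London", "Berlin"])
```

The paper's FRSUM training goes further than this evaluation-time probe, additionally perturbing the model during training with explicit adversarial samples and implicit factual perturbations; the sketch above only illustrates the success-rate metric used to quantify robustness.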

Authors (7)
  1. Wenhao Wu (71 papers)
  2. Wei Li (1121 papers)
  3. Jiachen Liu (45 papers)
  4. Xinyan Xiao (41 papers)
  5. Ziqiang Cao (34 papers)
  6. Sujian Li (82 papers)
  7. Hua Wu (191 papers)
Citations (8)