Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization (2311.09335v3)

Published 15 Nov 2023 in cs.CL and cs.AI

Abstract: Despite the remarkable performance of generative LLMs on abstractive summarization, they face two significant challenges: their considerable size and tendency to hallucinate. Hallucinations are concerning because they erode reliability and raise safety issues. Pruning is a technique that reduces model size by removing redundant weights, enabling more efficient sparse inference. Pruned models yield downstream task performance comparable to the original, making them ideal alternatives when operating on a limited budget. However, the effect that pruning has upon hallucinations in abstractive summarization with LLMs has yet to be explored. In this paper, we provide an extensive empirical study across five summarization datasets, two state-of-the-art pruning methods, and five instruction-tuned LLMs. Surprisingly, we find that hallucinations are less prevalent from pruned LLMs than the original models. Our analysis suggests that pruned models tend to depend more on the source document for summary generation. This leads to a higher lexical overlap between the generated summary and the source document, which could be a reason for the reduction in hallucination risk.

Analyzing the Impact of Pruning on Hallucination in LLMs for Abstractive Summarization

The paper "Investigating Hallucinations in Pruned LLMs for Abstractive Summarization" presents an empirical paper addressing the intersection of two pressing issues faced by LLMs: the computational burden imposed by their size and the propensity to generate hallucinated content. The research explores whether pruning—a technique traditionally employed to reduce the size of LLMs—can mitigate hallucination in abstractive summarization tasks. This paper is particularly pertinent given the increasing deployment of LLMs in resource-constrained environments.

Key Findings and Methodology

The authors conducted a comprehensive evaluation across five summarization datasets using two prominent pruning methods (SparseGPT and Wanda) on five instruction-tuned LLMs. A salient finding of this paper is that pruning may indeed lead to a reduction in hallucinatory outputs. This finding challenges the prevailing notion that model size and performance are positively correlated, suggesting instead that strategic sparsity in model parameters can yield qualitative benefits beyond efficient computation.
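
To make the pruning setup concrete, below is a minimal sketch of a Wanda-style criterion (weight magnitude scaled by the input activation norm) applied to a single linear layer. The function name, the toy calibration norms, and the per-row unstructured sparsity scheme are illustrative assumptions, not code from the paper; the study evaluates the full SparseGPT and Wanda methods rather than this simplified version.

```python
# Sketch of Wanda-style pruning for one linear layer (illustrative only).
import torch

def wanda_prune_layer(weight: torch.Tensor,
                      act_norms: torch.Tensor,
                      sparsity: float = 0.5) -> torch.Tensor:
    """Zero out the lowest-importance weights in each output row.

    weight:    (out_features, in_features) weight matrix of a linear layer.
    act_norms: (in_features,) L2 norm of each input feature, collected
               from a small calibration set.
    sparsity:  fraction of weights to remove per row (0.5 = 50% sparsity).
    """
    # Wanda importance score: |W_ij| * ||X_j||_2
    scores = weight.abs() * act_norms.unsqueeze(0)

    num_prune = int(weight.shape[1] * sparsity)
    pruned = weight.clone()
    if num_prune > 0:
        # Indices of the lowest-scoring weights in each row are set to zero.
        _, prune_idx = torch.topk(scores, num_prune, dim=1, largest=False)
        pruned.scatter_(1, prune_idx, 0.0)
    return pruned

# Toy usage: prune a 4x8 layer to 50% sparsity.
W = torch.randn(4, 8)
norms = torch.rand(8)  # stand-in for calibration activation norms
W_sparse = wanda_prune_layer(W, norms, sparsity=0.5)
print((W_sparse == 0).float().mean())  # ~0.5
```

Scoring weights by |W| times the activation norm, rather than by magnitude alone, accounts for how strongly each input feature is actually activated, which is the core idea behind Wanda; SparseGPT instead solves a layer-wise reconstruction problem when selecting and updating weights.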

Several metrics were employed to quantify hallucination risk, including HaRiM+, SummaC-ZS, and SummaC-Conv. Interestingly, the results indicate that pruned models exhibit lower hallucination risk than their non-pruned counterparts. The decrease in hallucinatory content was consistent across different sparsity levels, and models with greater lexical overlap between the source document and the generated summary were less prone to hallucination.
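
As a rough illustration of the lexical-overlap analysis, the sketch below computes the fraction of summary n-grams that also appear in the source document. This is a simple proxy for source-summary overlap and not necessarily the exact measure used in the paper.

```python
# Illustrative n-gram overlap between a generated summary and its source.
def ngram_overlap(summary: str, source: str, n: int = 2) -> float:
    """Fraction of summary n-grams that also appear in the source document."""
    def ngrams(text: str):
        tokens = text.lower().split()
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    summary_ngrams = ngrams(summary)
    if not summary_ngrams:
        return 0.0
    return len(summary_ngrams & ngrams(source)) / len(summary_ngrams)

# Toy usage
src = "The committee approved the budget after a long debate on Tuesday."
summ = "The committee approved the budget on Tuesday."
print(ngram_overlap(summ, src, n=2))  # high overlap: summary is largely extractive
```

Under this kind of measure, a more extractive summary scores higher, which matches the paper's observation that pruned models lean more heavily on the source document.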

Quantitative results from human evaluation further corroborate this reduction in hallucination risk, highlighting that summaries generated by pruned models often omit fewer critical details and are more semantically aligned with the source content.

Implications and Future Directions

The implications of this research are manifold. Practically, the paper suggests that deploying pruned LLMs could enhance the reliability of generated content in critical areas such as legal documentation or medical reporting, where accuracy is paramount. Theoretically, the observation that pruning encourages models to rely more heavily on the source content offers a compelling argument for revisiting model training strategies, potentially incorporating pruning into initial model design to gain both efficiency and accuracy.

The research opens several avenues for future investigation. One potential trajectory is extending the exploration of pruning effects on other natural language processing tasks, such as open-domain question answering and machine translation. Moreover, these findings provoke further inquiry into understanding how different types of hallucinations (e.g., factual inaccuracies versus stylistic deviations) respond to varying pruning regimes.

In conclusion, this paper provides valuable insights into how pruning can serve as a dual-purpose tool that not only reduces computational overhead but also enhances output fidelity, thereby helping to shape ongoing debates in the NLP community about optimal model design and deployment.

Authors (4)
  1. George Chrysostomou (9 papers)
  2. Zhixue Zhao (23 papers)
  3. Miles Williams (5 papers)
  4. Nikolaos Aletras (72 papers)
Citations (7)