Wanda++: Pruning Large Language Models via Regional Gradients (2503.04992v4)

Published 6 Mar 2025 in cs.LG, cs.AI, and cs.CL

Abstract: LLM pruning seeks to remove unimportant weights to speed up inference with minimal accuracy impact. However, existing methods often suffer from accuracy degradation without full-model sparsity-aware fine-tuning. This paper presents Wanda++, a novel pruning framework that outperforms state-of-the-art methods by utilizing decoder-block-level regional gradients. Specifically, Wanda++ improves the pruning score with regional gradients for the first time and proposes an efficient regional optimization method to minimize pruning-induced discrepancies between the dense and sparse decoder outputs. Notably, Wanda++ improves perplexity by up to 32% over Wanda on the language modeling task and generalizes effectively to downstream tasks. Moreover, despite updating weights with regional optimization, Wanda++ remains orthogonal to sparsity-aware fine-tuning, further reducing perplexity with LoRA to a great extent. Our approach is lightweight, pruning a 7B LLaMA model in under 10 minutes on a single H100 GPU.
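
As a rough, hypothetical illustration of the scoring idea described in the abstract, the sketch below combines a Wanda-style weight-times-activation-norm score with a regional (decoder-block-level) gradient term for a single linear layer, then prunes to a target unstructured sparsity. The function name, the `alpha` weighting, and the exact way the gradient enters the score are assumptions for illustration only, not the paper's actual formulation.

```python
import torch

def prune_layer_regional(weight: torch.Tensor,
                         act_norm: torch.Tensor,
                         regional_grad: torch.Tensor,
                         sparsity: float = 0.5,
                         alpha: float = 1.0) -> torch.Tensor:
    """Prune one linear layer's weight matrix to the target sparsity.

    weight:        (out_features, in_features) dense weights
    act_norm:      (in_features,) per-input-channel L2 norms of activations
                   collected from a small calibration set
    regional_grad: (out_features, in_features) gradient of a decoder-block-level
                   reconstruction loss w.r.t. the weights (the "regional" signal)
    """
    # Wanda-style importance score: |W_ij| * ||X_j||_2
    wanda_score = weight.abs() * act_norm.unsqueeze(0)
    # Hypothetical augmentation with the regional gradient magnitude.
    score = wanda_score + alpha * (regional_grad.abs() * weight.abs())
    # Zero out the lowest-scoring fraction of weights (unstructured sparsity).
    k = int(score.numel() * sparsity)
    if k == 0:
        return weight
    threshold = torch.kthvalue(score.flatten(), k).values
    mask = (score > threshold).to(weight.dtype)
    return weight * mask


# Example usage with random tensors standing in for a real decoder layer.
if __name__ == "__main__":
    W = torch.randn(128, 256)
    x_norm = torch.rand(256)
    g = torch.randn(128, 256)
    W_sparse = prune_layer_regional(W, x_norm, g, sparsity=0.5)
    print(f"achieved sparsity: {(W_sparse == 0).float().mean():.2f}")
```

The regional optimization step the abstract also mentions, which updates the remaining weights to minimize the dense-vs-sparse output gap of each decoder block, is not shown here.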

Authors (14)
  1. Yifan Yang (578 papers)
  2. Kai Zhen (18 papers)
  3. Bhavana Ganesh (5 papers)
  4. Aram Galstyan (142 papers)
  5. Goeric Huybrechts (15 papers)
  6. Markus Müller (114 papers)
  7. Jonas M. Kübler (10 papers)
  8. Rupak Vignesh Swaminathan (10 papers)
  9. Athanasios Mouchtaris (31 papers)
  10. Sravan Babu Bodapati (7 papers)
  11. Nathan Susanj (12 papers)
  12. Zheng Zhang (488 papers)
  13. Jack FitzGerald (11 papers)
  14. Abhishek Kumar (172 papers)