
A Simple and Effective Pruning Approach for Large Language Models (2306.11695v3)

Published 20 Jun 2023 in cs.CL, cs.AI, and cs.LG

Abstract: As their size increases, Large Language Models (LLMs) are natural candidates for network pruning methods: approaches that drop a subset of network weights while striving to preserve performance. Existing methods, however, require either retraining, which is rarely affordable for billion-scale LLMs, or solving a weight reconstruction problem reliant on second-order information, which may also be computationally expensive. In this paper, we introduce a novel, straightforward yet effective pruning method, termed Wanda (Pruning by Weights and activations), designed to induce sparsity in pretrained LLMs. Motivated by the recent observation of emergent large magnitude features in LLMs, our approach prunes weights with the smallest magnitudes multiplied by the corresponding input activations, on a per-output basis. Notably, Wanda requires no retraining or weight update, and the pruned LLM can be used as is. We conduct a thorough evaluation of our method Wanda on LLaMA and LLaMA-2 across various language benchmarks. Wanda significantly outperforms the established baseline of magnitude pruning and performs competitively against recent methods involving intensive weight updates. Code is available at https://github.com/locuslab/wanda.

A Simple and Effective Pruning Approach for LLMs

The paper under review presents a novel method, termed Wanda, designed to prune LLMs effectively without retraining or computationally expensive weight updates. The method addresses the challenges that existing pruning and sparsification techniques face when applied to models with billions of parameters.

Pruning Context and Motivation

LLMs like GPT-3 and GPT-4 have profoundly impacted the field of NLP with their superior performance. However, due to their substantial size and the associated computational costs, reducing these models' footprint without compromising performance is a critical area of research. Traditionally, network pruning and quantization are the two primary approaches to model compression. While network pruning sets certain weights to zero, reducing the effective parameter count, it typically necessitates retraining to recover the lost performance, which is impractical for LLMs.

The paper highlights the recent observation of emergent large-magnitude features in transformer-based models, which appear once models surpass roughly 6 billion parameters. These features are significantly larger in magnitude than typical activations and are crucial for the predictive capabilities of LLMs. Despite this, conventional pruning methods such as magnitude pruning, or those requiring second-order information for weight updates, fall short due to their high computational demands or their ineffectiveness without retraining.

Wanda Approach

Wanda distinguishes itself by focusing on pruning weights based on a novel metric combining weight magnitudes and input activation norms. Specifically, each weight's importance is evaluated by the product of its magnitude and the corresponding input activation norm. This metric is grounded in the observation that input activations can differ significantly in scale and must be considered for effective pruning.

Key Points of the Wanda Method:

  • Pruning Metric: Defined by $S_{ij} = |W_{ij}| \cdot \|X_j\|_2$, where $|W_{ij}|$ is the weight magnitude and $\|X_j\|_2$ is the $\ell_2$ norm of the $j$-th input feature aggregated across calibration tokens. This metric evaluates each weight by considering both its magnitude and the norm of the corresponding input activation.
  • Comparison Group: Weights are compared on a per-output basis, rather than globally or per-layer, to maintain a balanced pruning ratio across output features.
  • Implementation: Computationally efficient, requiring only a single forward pass through the model and minimal memory overhead (a sketch follows below).
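To make the metric concrete, below is a minimal PyTorch sketch of how the per-output Wanda pruning step could look for a single linear layer. The function name `wanda_prune` and the tensor shapes are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch

def wanda_prune(W: torch.Tensor, X: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Illustrative Wanda-style pruning of one linear layer.

    W: weight matrix of shape (out_features, in_features)
    X: calibration activations of shape (num_tokens, in_features)
    """
    # Per-input-feature activation norm ||X_j||_2, aggregated over calibration tokens.
    act_norm = X.norm(p=2, dim=0)                    # shape: (in_features,)

    # Wanda importance score S_ij = |W_ij| * ||X_j||_2.
    score = W.abs() * act_norm.unsqueeze(0)          # shape: (out_features, in_features)

    # Compare weights per output row: zero out the lowest-scoring fraction in each row.
    num_prune = int(W.shape[1] * sparsity)
    prune_idx = torch.argsort(score, dim=1)[:, :num_prune]

    W_pruned = W.clone()
    W_pruned.scatter_(1, prune_idx, 0.0)             # no retraining or weight update needed
    return W_pruned
```

Sorting within each output row, rather than over the whole layer, is what keeps the pruning ratio balanced across output features, which the paper identifies as important for LLMs.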

Empirical Evaluation

The effectiveness of Wanda is demonstrated through comprehensive experiments on the LLaMA and LLaMA-2 model families, including LLaMA-7B, 13B, 30B, 65B and LLaMA-2-7B, 13B, 70B. The models were assessed on several language benchmarks under zero-shot and few-shot evaluation settings.

Zero-Shot Performance:

  • Unstructured Pruning: At 50% sparsity, Wanda consistently outperforms traditional magnitude pruning and competes favorably with SparseGPT. Notably, the results show that sparse LLMs can achieve comparable performance to their dense counterparts.
  • Structured Pruning: For structured 4:8 and 2:4 sparsity patterns, Wanda again outperforms magnitude pruning and provides results on par with SparseGPT, highlighting its robustness and efficiency (a sketch of the 2:4 case follows below).
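For intuition on the structured case, here is a hedged sketch of how the same Wanda score could be used to enforce an N:M (e.g., 2:4) pattern, where at most N weights in every group of M consecutive input weights remain nonzero. The helper `nm_prune` is hypothetical and reuses the `score` tensor from the previous sketch.

```python
import torch

def nm_prune(W: torch.Tensor, score: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Keep the n highest-scoring weights in every group of m consecutive inputs."""
    out_f, in_f = W.shape
    assert in_f % m == 0, "input dimension must be divisible by the group size m"

    score_g = score.view(out_f, in_f // m, m)             # group scores along the input dimension
    drop = torch.argsort(score_g, dim=2)[:, :, : m - n]   # the m - n lowest scores per group

    mask = torch.ones_like(score_g)
    mask.scatter_(2, drop, 0.0)                           # 0 marks weights to prune
    return W * mask.view(out_f, in_f)
```

Such N:M patterns matter in practice because they map onto hardware support for structured sparsity, such as 2:4 acceleration on recent NVIDIA GPUs.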

Language Modeling Perplexity:

  • Wanda delivers competitive performance in preserving the perplexity of pruned models. For unstructured 50% sparsity, it achieves comparable results to SparseGPT, while avoiding the computational overhead associated with weight updates.

Analysis and Robustness

The paper further explores the robustness of Wanda under varying calibration datasets, showing that it remains effective with minimal calibration data. It also examines the effect of fine-tuning pruned models, indicating that most of the performance loss can be recovered with either LoRA fine-tuning or full-parameter dense fine-tuning.
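For reference, a minimal sketch of what such a LoRA recovery step might look like with the Hugging Face peft library is shown below; the checkpoint path, target modules, and hyperparameters are placeholders rather than the paper's exact configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a pruned checkpoint (placeholder path; assumes the sparse model was saved beforehand).
model = AutoModelForCausalLM.from_pretrained("path/to/pruned-llama")

# Attach low-rank adapters; only the adapter weights are trained, so the
# zeroed base weights remain frozen during fine-tuning.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; an illustrative choice
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
# ...then fine-tune with a standard causal LM objective (e.g., transformers.Trainer).
```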

Implications and Future Directions

The proposed Wanda method has significant implications for the deployment and democratization of LLMs. By enabling effective pruning without retraining, it facilitates the use of these models in environments with limited computational resources. This approach opens avenues for further research in pruning LLMs at higher sparsities and exploring its applicability in real-time sparse training.

In conclusion, Wanda offers a promising, efficient, and practical solution for pruning LLMs, contributing to the broader goal of making high-performing LLMs more accessible and sustainable. Further research could focus on extending Wanda to dynamic sparsity patterns and integrating it within sparse training paradigms, potentially revolutionizing how large models are trained and deployed.

Authors (4)
  1. Mingjie Sun
  2. Zhuang Liu
  3. Anna Bair
  4. J. Zico Kolter
Citations (244)