
Rethinking Pruning Large Language Models: Benefits and Pitfalls of Reconstruction Error Minimization (2406.15524v2)

Published 21 Jun 2024 in cs.CL and cs.LG

Abstract: This work suggests fundamentally rethinking the current practice of pruning LLMs. The prevailing approach is divide and conquer: split the model into submodels, prune them sequentially, and reconstruct the predictions of their dense counterparts on a small calibration set one at a time; the final model is obtained simply by putting the resulting sparse submodels back together. While this approach enables pruning under memory constraints, it generates high reconstruction errors. In this work, we first present an array of reconstruction techniques that can reduce this error by more than $90\%$. Surprisingly, however, we discover that minimizing reconstruction error is not always ideal: it can overfit the given calibration data, resulting in increased language perplexity and poor performance on downstream tasks. We find that self-generating calibration data can mitigate this trade-off between reconstruction and generalization, suggesting new directions given both the benefits and pitfalls of reconstruction for pruning LLMs.
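
The layer-wise reconstruction objective the abstract refers to can be made concrete with a minimal sketch: for one pruned submodel, refit the surviving weights so the sparse layer reproduces the dense layer's outputs on calibration activations. The function name, optimizer choice, and hyperparameters below are illustrative assumptions, not the authors' actual reconstruction techniques.

```python
import torch

def reconstruct_layer(W_dense, X_calib, mask, num_steps=100, lr=1e-3):
    """Minimal sketch of reconstruction error minimization for one layer.

    W_dense : (out, in) dense weight matrix of one submodel/layer
    X_calib : (in, n_samples) calibration activations entering this layer
    mask    : (out, in) binary mask of weights kept after pruning
    """
    # Dense predictions on the calibration data serve as the target.
    target = W_dense @ X_calib
    # Start from the masked dense weights and optimize only the kept entries.
    W = (W_dense * mask).clone().requires_grad_(True)
    opt = torch.optim.Adam([W], lr=lr)
    for _ in range(num_steps):
        opt.zero_grad()
        # Reconstruction error: mismatch between sparse and dense outputs.
        err = ((W * mask) @ X_calib - target).pow(2).mean()
        err.backward()
        opt.step()
    return (W * mask).detach()
```

As the abstract notes, driving this error toward zero on a small calibration set can overfit; the paper's self-generated calibration data is one way to mitigate that trade-off.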

Authors (4)
  1. Sungbin Shin (3 papers)
  2. Wonpyo Park (14 papers)
  3. Jaeho Lee (51 papers)
  4. Namhoon Lee (19 papers)
