Reconstruct the Pruned Model without Any Retraining (2407.13331v1)

Published 18 Jul 2024 in cs.LG

Abstract: Structured pruning is a promising hardware-friendly compression technique for LLMs, and it is expected to be retraining-free so that the enormous cost of retraining can be avoided. This retraining-free paradigm involves (1) pruning criteria to define the architecture and (2) distortion reconstruction to restore performance. However, existing methods often emphasize pruning criteria while using reconstruction techniques that are specific to certain modules or criteria, resulting in limited generalizability. To address this, we introduce the Linear Interpolation-based Adaptive Reconstruction (LIAR) framework, which is both efficient and effective. LIAR requires neither back-propagation nor retraining and is compatible with various pruning criteria and modules. By applying linear interpolation to the preserved weights, LIAR minimizes reconstruction error and effectively reconstructs the pruned output. Our evaluations on benchmarks such as GLUE, SQuAD, WikiText, and common sense reasoning show that LIAR enables a BERT model to maintain 98% accuracy even after removing 50% of its parameters, and that it achieves top performance for LLaMA in just a few minutes.
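
To make the reconstruction idea concrete, here is a minimal NumPy sketch of linear-interpolation-style reconstruction for one pruned linear layer. It is an illustration under stated assumptions, not the paper's exact LIAR algorithm: the function name `reconstruct_pruned_layer`, the ridge regularizer, and the least-squares formulation are all hypothetical. The idea is to express each pruned input feature as a linear combination of the preserved features, fit on a small calibration batch, and fold that map into the preserved weights so the layer's output is approximately restored without any gradient-based retraining.

```python
import numpy as np

def reconstruct_pruned_layer(W, keep_idx, X_calib, ridge=1e-4):
    """Closed-form reconstruction sketch for a pruned linear layer.

    W:        (d_in, d_out) weight matrix of the original layer.
    keep_idx: indices of the input features preserved by pruning.
    X_calib:  (n, d_in) calibration activations feeding this layer.
    ridge:    small Tikhonov term for numerical stability (assumed).
    """
    pruned_idx = np.setdiff1d(np.arange(W.shape[0]), keep_idx)
    X_keep = X_calib[:, keep_idx]    # (n, k) preserved activations
    X_prun = X_calib[:, pruned_idx]  # (n, d_in - k) pruned activations

    # Least-squares map B with X_prun ~ X_keep @ B, fit on calibration data.
    G = X_keep.T @ X_keep + ridge * np.eye(len(keep_idx))
    B = np.linalg.solve(G, X_keep.T @ X_prun)  # (k, d_in - k)

    # Fold the pruned rows into the preserved ones, so that
    # X_keep @ W_new approximates the original output X_calib @ W.
    W_new = W[keep_idx] + B @ W[pruned_idx]
    return W_new

# Example: prune half of an 8-dim input layer and reconstruct.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
X = rng.normal(size=(256, 8))
keep = np.array([0, 2, 4, 6])
W_rec = reconstruct_pruned_layer(W, keep, X)
# X[:, keep] @ W_rec now approximates X @ W on the calibration data.
```

Because the fit is a closed-form per-layer least-squares solve rather than back-propagation, a procedure of this shape runs in minutes, which is consistent with the retraining-free setting the abstract describes.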

Authors (6)
  1. Pingjie Wang
  2. Ziqing Fan
  3. Shengchao Hu
  4. Zhe Chen
  5. Yanfeng Wang
  6. Yu Wang