StructADMM: A Systematic, High-Efficiency Framework of Structured Weight Pruning for DNNs (1807.11091v3)

Published 29 Jul 2018 in cs.NE, cs.CV, and cs.LG

Abstract: Weight pruning methods for DNNs have been demonstrated to achieve a good model pruning rate without loss of accuracy, thereby alleviating the significant computation/storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitation of irregular network structure and have demonstrated actual GPU acceleration. However, in prior work the pruning rate (degree of sparsity) and GPU acceleration are limited (to less than 50%) when accuracy needs to be maintained. In this work, we overcome these limitations by proposing a unified, systematic framework of structured weight pruning for DNNs. It is a framework that can be used to induce different types of structured sparsity, such as filter-wise, channel-wise, and shape-wise sparsity, as well as non-structured sparsity. The proposed framework incorporates stochastic gradient descent with ADMM, and can be understood as a dynamic regularization method in which the regularization target is analytically updated in each iteration. Without loss of accuracy on the AlexNet model, we achieve 2.58X and 3.65X average measured speedup on two GPUs, clearly outperforming the prior work. The average speedups reach 3.15X and 8.52X when allowing a moderate accuracy loss of 2%. In this case the model compression for convolutional layers is 15.0X, corresponding to 11.93X measured CPU speedup. Our experiments on the ResNet model and on other data sets such as UCF101 and CIFAR-10 demonstrate the consistently higher performance of our framework.
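
The abstract describes alternating SGD with ADMM, where the regularization target is updated analytically each iteration. Below is a minimal sketch of that idea for a single convolutional layer, assuming PyTorch and filter-wise sparsity as the structured constraint; the helper names, the `rho` and `keep_ratio` parameters, and the one-layer setup are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def project_to_filter_sparse(W, keep_ratio=0.5):
    # Euclidean projection onto the set of filter-wise structured-sparse tensors:
    # keep the output filters with the largest L2 norms, zero out the rest.
    norms = W.view(W.size(0), -1).norm(dim=1)
    k = max(1, int(keep_ratio * W.size(0)))
    keep = torch.topk(norms, k).indices
    Z = torch.zeros_like(W)
    Z[keep] = W[keep]
    return Z

def admm_prune_step(model, loss_fn, data, target, Z, U, rho=1e-3, lr=1e-2):
    # One ADMM iteration on the split W = Z:
    #   1) SGD step on the augmented Lagrangian  f(W) + (rho/2)||W - Z + U||^2
    #   2) analytical Z-update: projection of (W + U) onto the structured-sparse set
    #   3) dual-variable update
    W = model.conv.weight  # assumes the model exposes a single conv layer for illustration
    loss = loss_fn(model(data), target) + (rho / 2) * (W - Z + U).pow(2).sum()
    loss.backward()
    with torch.no_grad():
        W -= lr * W.grad
        W.grad.zero_()
        Z_new = project_to_filter_sparse(W + U)  # the "regularization target", updated analytically
        U_new = U + W - Z_new
    return Z_new, U_new
```

In this reading, the quadratic penalty acts as a dynamic regularizer: each iteration pulls the weights toward a freshly recomputed structured-sparse target Z rather than toward a fixed one, which is what distinguishes the ADMM formulation from plain group-regularized training.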

Authors (11)
  1. Tianyun Zhang (26 papers)
  2. Shaokai Ye (20 papers)
  3. Kaiqi Zhang (19 papers)
  4. Xiaolong Ma (57 papers)
  5. Ning Liu (199 papers)
  6. Linfeng Zhang (160 papers)
  7. Jian Tang (326 papers)
  8. Kaisheng Ma (46 papers)
  9. Xue Lin (92 papers)
  10. Makan Fardad (19 papers)
  11. Yanzhi Wang (197 papers)
Citations (49)