
Progressive Weight Pruning of Deep Neural Networks using ADMM (1810.07378v2)

Published 17 Oct 2018 in cs.LG, cs.CV, cs.NE, and stat.ML

Abstract: Deep neural networks (DNNs), although achieving human-level performance in many domains, have very large model sizes that hinder their broader application on edge computing devices. Extensive research has been conducted on DNN model compression and pruning, but most previous work took heuristic approaches. This work proposes a progressive weight pruning approach based on ADMM (Alternating Direction Method of Multipliers), a powerful technique for non-convex optimization problems with potentially combinatorial constraints. Motivated by dynamic programming, the proposed method reaches extremely high pruning rates through a sequence of partial prunings at moderate pruning rates, thereby resolving the accuracy degradation and long convergence times encountered when pursuing extremely high pruning ratios directly. It achieves up to a 34x pruning rate on the ImageNet dataset and a 167x pruning rate on the MNIST dataset, significantly higher than rates reported in the literature. Under the same number of epochs, the proposed method also achieves faster convergence and higher compression rates. The code and pruned DNN models are released at bit.ly/2zxdlss
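
The ADMM-based pruning step the abstract describes can be made concrete with a short sketch. The following is a minimal illustration, not the authors' released code: it assumes a PyTorch model, a hypothetical `layers` list of (name, parameter) pairs to prune, and illustrative hyperparameters `rho`, `ratio`, `admm_steps`, and `sgd_steps`. ADMM splits the constrained problem into a W-update solvable by ordinary SGD with a quadratic penalty, a Z-update that is a Euclidean projection onto the sparsity constraint set (keep the largest-magnitude weights), and a dual U-update.

```python
# A minimal sketch of ADMM weight pruning under the assumptions above;
# all names and hyperparameter values here are illustrative.
import torch
from itertools import cycle, islice

def project_sparse(w, ratio):
    """Euclidean projection onto the sparsity constraint set:
    keep the largest-magnitude entries, zero out the rest."""
    k = int(w.numel() * (1.0 - ratio))  # number of weights to keep
    z = torch.zeros_like(w)
    if k > 0:
        _, idx = torch.topk(w.abs().flatten(), k)
        z.view(-1)[idx] = w.flatten()[idx]
    return z

def admm_prune(model, layers, loss_fn, data_loader, ratio,
               rho=1e-3, admm_steps=10, sgd_steps=100, lr=1e-3):
    """One ADMM pruning pass at a single pruning `ratio` (fraction removed)."""
    # Auxiliary sparse copies Z and scaled dual variables U, one per layer.
    Z = {n: project_sparse(p.detach().clone(), ratio) for n, p in layers}
    U = {n: torch.zeros_like(p) for n, p in layers}
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(admm_steps):
        # W-update: minimize loss(W) + (rho/2)||W - Z + U||^2 by SGD.
        for x, y in islice(cycle(data_loader), sgd_steps):
            loss = loss_fn(model(x), y)
            for n, p in layers:
                loss = loss + (rho / 2) * (p - Z[n] + U[n]).pow(2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Z-update: project W + U onto the sparsity constraint.
        # U-update: dual ascent on the constraint W = Z.
        for n, p in layers:
            Z[n] = project_sparse(p.detach() + U[n], ratio)
            U[n] = U[n] + p.detach() - Z[n]
    # Hard-prune: zero the weights outside the final sparse support.
    with torch.no_grad():
        for n, p in layers:
            p.mul_((Z[n] != 0).to(p.dtype))
```

The progressive scheme described in the abstract would then invoke such a routine repeatedly with increasing `ratio` values (e.g., a moderate partial pruning first, then the extreme target), using each partially pruned model as the starting point for the next pass rather than attempting the highest pruning ratio in one shot.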

Authors (13)
  1. Shaokai Ye (20 papers)
  2. Tianyun Zhang (26 papers)
  3. Kaiqi Zhang (19 papers)
  4. Jiayu Li (100 papers)
  5. Kaidi Xu (85 papers)
  6. Yunfei Yang (26 papers)
  7. Fuxun Yu (39 papers)
  8. Jian Tang (326 papers)
  9. Makan Fardad (19 papers)
  10. Sijia Liu (204 papers)
  11. Xiang Chen (343 papers)
  12. Xue Lin (92 papers)
  13. Yanzhi Wang (197 papers)
Citations (36)