BLK-REW: A Unified Block-based DNN Pruning Framework using Reweighted Regularization Method (2001.08357v2)

Published 23 Jan 2020 in cs.LG, cs.AI, cs.CV, cs.NE, and stat.ML

Abstract: Accelerating DNN execution on resource-limited computing platforms has been a long-standing problem. Prior works use l1-based group lasso or dynamic regularization such as ADMM to perform structured pruning on DNN models and thereby exploit parallel computing architectures. However, both the pruning dimensions and the pruning methods lack universality, leading to degraded performance and limited applicability. To address this, we propose a new block-based pruning framework that combines a general and flexible structured pruning dimension with a powerful and efficient reweighted regularization method. Our framework is universal: it applies to both CNNs and RNNs, giving complete support for the two major kinds of computation-intensive layers (i.e., CONV and FC layers). To complete all aspects of the pruning-for-acceleration task, we also integrate compiler-based code optimization into our framework so that DNN inference can run in real time. To the best of our knowledge, this is the first weight pruning framework to achieve universal coverage of both CNNs and RNNs with real-time mobile acceleration and no accuracy compromise.
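
The block-based reweighted regularization described in the abstract can be pictured with a small sketch: a layer's weight matrix is partitioned into fixed-size blocks, a penalty on each block's norm is reweighted inversely to that block's current magnitude (so already-small blocks are driven toward zero while large, important blocks are mostly spared), and blocks whose norms fall below a threshold are then removed entirely. The code below is only an illustration of this idea under assumed settings, not the paper's implementation; the block shape, the Frobenius-norm penalty form, `eps`, the learning rate, and `keep_ratio` are all hypothetical choices for demonstration.

```python
# Minimal sketch of block-based pruning with reweighted regularization
# (an illustration of the general idea, NOT the paper's implementation).
# Block shape, the Frobenius-norm penalty, eps, the learning rate, and
# keep_ratio are assumed values chosen for demonstration only.
import numpy as np

def block_norms(W, br, bc):
    """Frobenius norm of every (br x bc) block of the 2D weight matrix W."""
    rows, cols = W.shape
    norms = np.zeros((rows // br, cols // bc))
    for i in range(norms.shape[0]):
        for j in range(norms.shape[1]):
            norms[i, j] = np.linalg.norm(W[i*br:(i+1)*br, j*bc:(j+1)*bc])
    return norms

def reweighted_penalty_grad(W, br, bc, eps=0.25):
    """Gradient of the penalty sum_b w_b * ||W_b||_F^2, with the reweighting
    factor w_b = 1 / (||W_b||_F^2 + eps) held fixed at the current iterate.
    Blocks that are already small get a large w_b and are pushed toward zero;
    large (important) blocks are penalized only lightly."""
    norms = block_norms(W, br, bc)
    w = 1.0 / (norms**2 + eps)  # reweighting factors, treated as constants
    grad = np.zeros_like(W)
    for i in range(w.shape[0]):
        for j in range(w.shape[1]):
            sl = np.s_[i*br:(i+1)*br, j*bc:(j+1)*bc]
            grad[sl] = 2.0 * w[i, j] * W[sl]
    return grad

def prune_blocks(W, br, bc, keep_ratio=0.25):
    """Zero out whole blocks, keeping only the keep_ratio largest-norm blocks."""
    norms = block_norms(W, br, bc)
    k = max(1, int(np.ceil(keep_ratio * norms.size)))
    thresh = np.sort(norms, axis=None)[-k]
    mask = np.repeat(np.repeat(norms >= thresh, br, axis=0), bc, axis=1)
    return W * mask

# Toy usage: a penalty-only loop (in real training this gradient would be
# added to the task-loss gradient), followed by hard removal of whole blocks.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
for _ in range(200):
    W -= 0.05 * reweighted_penalty_grad(W, 2, 2)
W_pruned = prune_blocks(W, 2, 2, keep_ratio=0.25)
print(f"nonzero blocks kept: {int((block_norms(W_pruned, 2, 2) > 0).sum())} / 16")
```

Removing weights at the granularity of whole blocks, rather than individual elements, is what preserves the regular structure that the compiler-based code optimization mentioned in the abstract can exploit for real-time mobile inference.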

Authors (11)
  1. Xiaolong Ma (57 papers)
  2. Zhengang Li (31 papers)
  3. Yifan Gong (82 papers)
  4. Tianyun Zhang (26 papers)
  5. Wei Niu (68 papers)
  6. Zheng Zhan (27 papers)
  7. Pu Zhao (82 papers)
  8. Jian Tang (327 papers)
  9. Xue Lin (92 papers)
  10. Bin Ren (136 papers)
  11. Yanzhi Wang (197 papers)
Citations (14)