A Unified DNN Weight Compression Framework Using Reweighted Optimization Methods (2004.05531v1)

Published 12 Apr 2020 in cs.LG, cs.CV, and cs.NE

Abstract: To address the large model size and intensive computation requirements of deep neural networks (DNNs), weight pruning techniques have been proposed and generally fall into two categories, i.e., static regularization-based pruning and dynamic regularization-based pruning. However, the former currently suffers from either complex workloads or accuracy degradation, while the latter takes a long time to tune the parameters to achieve the desired pruning rate without accuracy loss. In this paper, we propose a unified DNN weight pruning framework with dynamically updated regularization terms bounded by the designated constraint, which can generate both non-structured sparsity and different kinds of structured sparsity. We also extend our method to an integrated framework for the combination of different DNN compression tasks.
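
The abstract does not spell out the update rule, but "dynamically updated regularization terms" in reweighted optimization methods typically follows the classic reweighted l1/l2 scheme, where each group's penalty coefficient is refreshed from the group's current magnitude so that already-small groups are pushed harder toward zero. The sketch below illustrates that idea in PyTorch under this assumption; the function names, the filter-wise grouping, and hyperparameters such as `eps` are illustrative, not the authors' implementation.

```python
# Minimal sketch of reweighted group regularization for weight pruning,
# assuming the classic reweighted-l1 update scheme. Not the paper's code;
# all names and hyperparameters are illustrative.
import torch
import torch.nn as nn

def group_norms(weight: torch.Tensor) -> torch.Tensor:
    # Structured (filter-wise) grouping: one l2 norm per output filter/row.
    return weight.flatten(1).norm(dim=1)

def reweighted_penalty(model: nn.Module, coeffs: dict) -> torch.Tensor:
    # sum_i P_i * ||W_i||_2, with coefficients P_i held fixed during
    # each optimization round.
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, module in model.named_modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            penalty = penalty + (coeffs[name] * group_norms(module.weight)).sum()
    return penalty

def update_coeffs(model: nn.Module, eps: float = 1e-3) -> dict:
    # Dynamic update: P_i <- 1 / (||W_i||^2 + eps), so groups that shrank
    # in the previous round are penalized more heavily in the next one.
    coeffs = {}
    with torch.no_grad():
        for name, module in model.named_modules():
            if isinstance(module, (nn.Conv2d, nn.Linear)):
                coeffs[name] = 1.0 / (group_norms(module.weight) ** 2 + eps)
    return coeffs
```

In a training loop, one would alternate rounds: minimize `task_loss + lam * reweighted_penalty(model, coeffs)` for several epochs, call `update_coeffs` to refresh the penalties, and finally hard-prune the groups whose norms fall below a threshold to reach the target pruning rate.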

Authors (10)
  1. Tianyun Zhang (26 papers)
  2. Xiaolong Ma (57 papers)
  3. Zheng Zhan (27 papers)
  4. Shanglin Zhou (14 papers)
  5. Minghai Qin (28 papers)
  6. Fei Sun (151 papers)
  7. Yen-Kuang Chen (10 papers)
  8. Caiwen Ding (98 papers)
  9. Makan Fardad (19 papers)
  10. Yanzhi Wang (197 papers)
Citations (5)
