Towards thinner convolutional neural networks through Gradually Global Pruning (1703.09916v1)

Published 29 Mar 2017 in cs.CV

Abstract: Deep network pruning is an effective method to reduce the storage and computation cost of deep neural networks when deploying them on resource-limited devices. Among the many pruning granularities, neuron-level pruning removes redundant neurons and filters from the model, resulting in thinner networks. In this paper, we propose a gradually global pruning scheme for neuron-level pruning. In each pruning step, a small percentage of neurons is selected and dropped across all layers of the model. We also propose a simple method to eliminate the biases in evaluating the importance of neurons, which makes the scheme feasible. Compared with layer-wise pruning schemes, our scheme avoids the difficulty of determining the redundancy in each layer and is more effective for deep networks. Our scheme automatically finds a thinner sub-network within the original network under a given performance requirement.
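
The following is a minimal sketch of the gradually global pruning idea described in the abstract, written in PyTorch. The importance score used here (each filter's L1 norm, normalized within its layer so scores are comparable across layers) is an illustrative assumption standing in for the paper's bias-elimination method, and the fine-tuning between pruning steps is omitted.

```python
# Hedged sketch: one global pruning step across all Conv2d layers.
# The layer-normalized L1 importance score is an assumption, not the
# paper's exact criterion.
import torch
import torch.nn as nn

def global_prune_step(model, prune_fraction=0.02):
    """Mask out the globally least-important filters in all Conv2d layers."""
    scores = []  # entries: (module, filter_index, normalized_score)
    for _, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            # L1 norm of each output filter's weights.
            l1 = module.weight.detach().abs().sum(dim=(1, 2, 3))
            # Normalize within the layer so importance is comparable
            # across layers (stand-in for the paper's bias elimination).
            norm = l1 / (l1.mean() + 1e-12)
            for idx, s in enumerate(norm.tolist()):
                scores.append((module, idx, s))

    # Drop the lowest-scoring small fraction of filters across *all* layers.
    scores.sort(key=lambda t: t[2])
    n_drop = int(len(scores) * prune_fraction)
    with torch.no_grad():
        for module, idx, _ in scores[:n_drop]:
            module.weight[idx].zero_()
            if module.bias is not None:
                module.bias[idx] = 0.0

# Usage: alternate pruning steps with fine-tuning until the given
# performance requirement is reached (fine-tuning loop not shown).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
global_prune_step(model, prune_fraction=0.05)
```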

Authors (5)
  1. Zhengtao Wang (6 papers)
  2. Ce Zhu (85 papers)
  3. Zhiqiang Xia (3 papers)
  4. Qi Guo (237 papers)
  5. Yipeng Liu (89 papers)
Citations (4)
