Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (1811.00250v3)

Published 1 Nov 2018 in cs.CV

Abstract: Previous works utilized ''smaller-norm-less-important'' criterion to prune filters with smaller norm values in a convolutional neural network. In this paper, we analyze this norm-based criterion and point out that its effectiveness depends on two requirements that are not always met: (1) the norm deviation of the filters should be large; (2) the minimum norm of the filters should be small. To solve this problem, we propose a novel filter pruning method, namely Filter Pruning via Geometric Median (FPGM), to compress the model regardless of those two requirements. Unlike previous methods, FPGM compresses CNN models by pruning filters with redundancy, rather than those with ''relatively less'' importance. When applied to two image classification benchmarks, our method validates its usefulness and strengths. Notably, on CIFAR-10, FPGM reduces more than 52% FLOPs on ResNet-110 with even 2.69% relative accuracy improvement. Moreover, on ILSVRC-2012, FPGM reduces more than 42% FLOPs on ResNet-101 without top-5 accuracy drop, which has advanced the state-of-the-art. Code is publicly available on GitHub: https://github.com/he-y/filter-pruning-geometric-median

Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration

The paper "Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration" by Yang He et al. addresses the longstanding problem of reducing the computational and storage demands of deep Convolutional Neural Networks (CNNs). The authors propose a novel method, named Filter Pruning via Geometric Median (FPGM), which overcomes several limitations inherent in previous norm-based pruning criteria.

The paper begins by critiquing existing norm-based pruning methods, which typically rely on the assumption that filters with smaller norms are less important and can be pruned. The effectiveness of this approach hinges on two critical conditions: the norm deviation among filters must be large, and some filters must have norms close to zero. These requirements are not consistently met in practice, leading to suboptimal pruning and potential degradation in model performance.
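The failure mode is easy to demonstrate. The following sketch (function and variable names are illustrative, not from the paper's repository) ranks filters by L2 norm and shows that when norms are tightly clustered, the "smaller-norm-less-important" criterion barely discriminates between filters:

```python
import numpy as np

def norm_based_prune_indices(filters, prune_ratio=0.3):
    """Rank filters by L2 norm; the smallest-norm filters are pruned first.

    `filters` has shape (num_filters, channels, kh, kw).
    """
    norms = np.linalg.norm(filters.reshape(filters.shape[0], -1), axis=1)
    num_prune = int(prune_ratio * filters.shape[0])
    return np.argsort(norms)[:num_prune]

# When norm deviation is small, every filter has nearly the same norm,
# so the ranking (and hence the pruning decision) is effectively arbitrary.
rng = np.random.default_rng(0)
tight = 1.0 + 0.001 * rng.standard_normal((16, 8, 3, 3))
pruned = norm_based_prune_indices(tight)
```

Here `tight` models a layer whose filter norms all sit near 1.0: the criterion still picks 30% of the filters, but the selection carries almost no information about importance.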

To address these limitations, the authors introduce FPGM, which selects filters based on their redundancy rather than their relative importance. The method uses the geometric median of a layer's filters to identify those whose information can be most efficiently represented by the remaining filters: the filters closest to the geometric median are the most replaceable. Pruning these redundant filters therefore has minimal impact on the overall network performance.
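The selection rule can be sketched as follows. This is a minimal illustration, not the paper's implementation: it computes the geometric median of the flattened filters with Weiszfeld's iterative algorithm, then marks the filters nearest to that median as redundant and prunable.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld iteration: the geometric median minimizes the sum of
    Euclidean distances to all row vectors in `points`."""
    median = points.mean(axis=0)
    for _ in range(iters):
        dists = np.maximum(np.linalg.norm(points - median, axis=1), eps)
        weights = 1.0 / dists
        new_median = (weights[:, None] * points).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_median - median) < eps:
            break
        median = new_median
    return median

def fpgm_prune_indices(filters, prune_ratio=0.3):
    """Select the filters closest to the layer's geometric median.

    These are the most redundant filters: their information is best
    represented by the other filters in the same layer, so pruning them
    is expected to cost the least, regardless of their norm values.
    """
    flat = filters.reshape(filters.shape[0], -1)
    gm = geometric_median(flat)
    dists = np.linalg.norm(flat - gm, axis=1)
    num_prune = int(prune_ratio * filters.shape[0])
    return np.argsort(dists)[:num_prune]  # closest-to-median first
```

Note the contrast with the norm criterion: a filter with a large norm can still be pruned if it duplicates what neighboring filters already encode, which is exactly the redundancy FPGM targets.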

Notably, the paper presents strong numerical results to support the efficacy of FPGM. When applied to image classification benchmarks such as CIFAR-10 and ILSVRC-2012, FPGM achieved significant reductions in FLOPs while either maintaining or even improving accuracy. For instance:

  • On CIFAR-10, FPGM reduced more than 52% of FLOPs on ResNet-110 with a relative accuracy improvement of 2.69%.
  • On ILSVRC-2012, FPGM achieved a reduction of over 42% of FLOPs on ResNet-101 without any drop in top-5 accuracy.

These results underline the robustness and applicability of FPGM across different model architectures and datasets.

Implications and Future Directions

The implications of this work are profound for both theoretical research and practical applications in the AI community. FPGM provides a more reliable and efficient method for model compression, which is crucial for deploying deep CNNs on resource-constrained devices such as mobile phones and embedded systems. The method's ability to retain and even enhance model accuracy while significantly reducing computational complexity represents a notable advance in the field.

From a theoretical perspective, this paper opens new avenues for exploring geometric properties in filter selection and model pruning. The success of the geometric median approach suggests that other geometric and statistical methods could be similarly effective in understanding and optimizing model structures.

Future research could build on these findings by integrating FPGM with other compression techniques such as matrix decomposition or low-precision weights. Such integrations could further enhance model efficiency, making it feasible to deploy even more complex models in real-world applications.

Moreover, expanding the application of FPGM to other types of neural networks, such as Recurrent Neural Networks (RNNs) and Transformer architectures, could provide valuable insights and broaden the scope of its utility.

In summary, this paper presents a well-founded critique of previous norm-based pruning methods and introduces a novel, effective alternative in FPGM. The strong empirical results and the method's potential for broad applicability make this work a significant contribution to the ongoing efforts in neural network optimization and model compression.

Authors (5)
  1. Yang He (117 papers)
  2. Ping Liu (93 papers)
  3. Ziwei Wang (128 papers)
  4. Zhilan Hu (5 papers)
  5. Yi Yang (855 papers)
Citations (974)