Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks (1909.08174v1)

Published 18 Sep 2019 in cs.CV, cs.LG, and eess.IV

Abstract: Filter pruning is one of the most effective ways to accelerate and compress convolutional neural networks (CNNs). In this work, we propose a global filter pruning algorithm called Gate Decorator, which transforms a vanilla CNN module by multiplying its output by the channel-wise scaling factors, i.e. gate. When the scaling factor is set to zero, it is equivalent to removing the corresponding filter. We use Taylor expansion to estimate the change in the loss function caused by setting the scaling factor to zero and use the estimation for the global filter importance ranking. Then we prune the network by removing those unimportant filters. After pruning, we merge all the scaling factors into its original module, so no special operations or structures are introduced. Moreover, we propose an iterative pruning framework called Tick-Tock to improve pruning accuracy. The extensive experiments demonstrate the effectiveness of our approaches. For example, we achieve the state-of-the-art pruning ratio on ResNet-56 by reducing 70% FLOPs without noticeable loss in accuracy. For ResNet-50 on ImageNet, our pruned model with 40% FLOPs reduction outperforms the baseline model by 0.31% in top-1 accuracy. Various datasets are used, including CIFAR-10, CIFAR-100, CUB-200, ImageNet ILSVRC-12 and PASCAL VOC 2011. Code is available at github.com/youzhonghui/gate-decorator-pruning

Essay on "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks"

The paper "Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks" introduces a technique aimed at improving the efficiency of CNNs. The authors propose a global filter pruning algorithm named Gate Decorator, which transforms a CNN module by multiplying its output feature maps by channel-wise scaling factors, referred to as gates. The method identifies and removes unimportant filters across the entire network rather than layer by layer, which avoids having to set pruning ratios for each layer individually.

Methodological Approach

The Gate Decorator algorithm uses a first-order Taylor expansion to estimate the change in the loss function caused by setting each scaling factor to zero. This estimate addresses a central challenge in filter pruning: computing the global filter importance ranking (GFIR). Gates are applied to the CNN's feature maps, and their importance is derived from gradient information collected during training, so filters can be ranked across the entire network and the least important ones removed while preserving accuracy.
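
To make the gating and ranking step concrete, the following sketch shows how such a gate might look in PyTorch. It is a minimal illustration of the idea described above, not the authors' released implementation; the class name `GateLayer`, its `score` buffer, and the `accumulate_importance` helper are names chosen here for exposition.

```python
import torch
import torch.nn as nn

class GateLayer(nn.Module):
    """Channel-wise gate placed after a conv/BN block (illustrative sketch)."""

    def __init__(self, num_channels):
        super().__init__()
        # One scaling factor per output channel, initialized to 1 so that
        # the gated network is initially identical to the original one.
        self.gate = nn.Parameter(torch.ones(num_channels))
        # Running Taylor-expansion importance score for each channel.
        self.register_buffer("score", torch.zeros(num_channels))

    def forward(self, x):
        # x has shape (N, C, H, W); scale each channel by its gate value.
        return x * self.gate.view(1, -1, 1, 1)

    def accumulate_importance(self):
        # First-order Taylor estimate of the loss change when a gate is
        # set to zero: |gate * d(loss)/d(gate)|, accumulated over batches.
        if self.gate.grad is not None:
            self.score += (self.gate.detach() * self.gate.grad).abs()
```

After a pass over the training data, the `score` buffers from every gate can be concatenated and sorted to obtain the global ranking; filters with the smallest scores are removed, and each surviving gate value is later folded back into the preceding convolution or batch-normalization weights, so the pruned network contains no extra structures.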

Additionally, the authors introduce an iterative pruning framework called Tick-Tock, which alternates between two training phases to refine the model progressively. The Tick phase computes the filter importance scores while correcting the internal covariate shift introduced by pruning; the Tock phase fine-tunes the network under a sparsity constraint on the gates so that filters can be pruned more accurately in the next round.
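
The sketch below lays out one Tick-Tock round under these assumptions. The function name `tick_tock_round` and its hyperparameters (`prune_num`, `sparsity_weight`) are illustrative rather than taken from the paper, zeroing a gate here stands in for physically removing the corresponding filter, and the detail that only part of the network is trainable during the Tick phase is omitted for brevity.

```python
import torch

def tick_tock_round(model, gates, loader, optimizer, criterion,
                    prune_num=64, sparsity_weight=1e-4):
    """One round of the alternating Tick-Tock schedule (illustrative sketch).

    `gates` is a list of GateLayer modules from the previous sketch.
    """
    # --- Tick: accumulate Taylor importance scores over the data ---
    for x, y in loader:
        criterion(model(x), y).backward()
        for g in gates:
            g.accumulate_importance()
        model.zero_grad()

    # Globally rank all filters and zero the gates of the least important.
    # (Actual removal of the zeroed filters is done by network surgery
    # afterwards, and the gate values are merged back into the model.)
    scores = torch.cat([g.score for g in gates])
    threshold = scores.sort().values[prune_num - 1]
    with torch.no_grad():
        for g in gates:
            g.gate[g.score <= threshold] = 0.0
            g.score.zero_()

    # --- Tock: fine-tune with an L1 sparsity penalty on the gates ---
    for x, y in loader:
        loss = criterion(model(x), y)
        loss = loss + sparsity_weight * sum(g.gate.abs().sum() for g in gates)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```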

Quantitative Results

Empirical evaluations indicate that the proposed approach achieves impressive results across various benchmarks. For instance, the method achieves a 70% reduction in FLOPs for the ResNet-56 model on the CIFAR-10 dataset without noticeable loss in accuracy. Remarkably, for ResNet-50 on the ImageNet dataset, the method results in a 40% reduction in FLOPs while gaining 0.31% in top-1 accuracy over the baseline. These results underscore the efficacy of the Gate Decorator in optimizing computational efficiency without sacrificing model performance.

Implications and Future Directions

The significant reductions in model size and complexity, achieved without substantial accuracy loss, have immediate implications for deploying CNNs on resource-constrained platforms like mobile and IoT devices. The absence of special structures or operations post-pruning ensures compatibility with existing hardware, maintaining usability and accessibility.

Theoretically, the correspondence between global filter pruning and neural architecture search (NAS) enhances the relevance of this research. The approach suggests a paradigm where filter pruning informs task-specific network architecture optimization, potentially guiding NAS methods towards more efficient search strategies. This aligns with the trend of tailoring neural network architectures to specific deployment constraints and tasks, pushing the boundaries of network efficiency.

Conclusion

The Gate Decorator represents a substantive advancement in the field of CNN optimization. Through its algorithmic efficiency and robust framework, it addresses the computational challenges posed by deep networks, paving the way for broader and more effective CNN applications. Future work may explore the integration of Gate Decorator principles with emerging NAS techniques, as well as extending its applicability to other types of neural networks and tasks beyond the typical vision applications. This research offers a promising trajectory for future CNN enhancements, particularly concerning computationally efficient architectures.

Authors (5)
  1. Zhonghui You (2 papers)
  2. Kun Yan (23 papers)
  3. Jinmian Ye (8 papers)
  4. Meng Ma (15 papers)
  5. Ping Wang (289 papers)
Citations (228)