Learning Efficient Convolutional Networks through Network Slimming (1708.06519v1)

Published 22 Aug 2017 in cs.CV, cs.AI, and cs.LG

Abstract: The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20x reduction in model size and a 5x reduction in computing operations.

Authors (6)
  1. Zhuang Liu (63 papers)
  2. Jianguo Li (59 papers)
  3. Zhiqiang Shen (172 papers)
  4. Gao Huang (178 papers)
  5. Shoumeng Yan (7 papers)
  6. Changshui Zhang (81 papers)
Citations (2,266)

Summary

Learning Efficient Convolutional Networks through Network Slimming

The paper "Learning Efficient Convolutional Networks through Network Slimming" presents a novel approach for optimizing Convolutional Neural Networks (CNNs) to address the computational demands that hinder their deployment in resource-constrained environments such as mobile devices and Internet of Things (IoT) platforms. The proposed method, termed "network slimming," is a learning scheme that leverages channel-level sparsity to reduce model size, decrease run-time memory, and lower the number of computing operations while maintaining accuracy comparable to the original models.

Methodology

The core idea of network slimming is to impose L1 regularization on the scaling factors of Batch Normalization (BN) layers. During training, this penalty drives many of the scaling factors toward zero, identifying unimportant channels that can be pruned after training; the result is a narrower and consequently more efficient network. The network slimming process consists of three steps (a code sketch of the first two follows the list):

  1. Training with Sparsity Regularization: The training objective incorporates a sparsity-induced penalty on the scaling factors associated with each channel in BN layers.
  2. Channel Pruning: After training, channels with scaling factors below a certain threshold are pruned.
  3. Fine-tuning: The pruned model is fine-tuned to recover any potential loss in accuracy due to the pruning process.
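
In the paper's formulation, the training objective is the standard task loss plus an L1 penalty, weighted by a factor λ, on the scaling factor γ of each BN channel; this penalty is what pushes the γ of unimportant channels toward zero. The following is a minimal PyTorch-style sketch of steps 1 and 2, not the authors' implementation: the function names, the λ value, and the global prune ratio are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Illustrative hyperparameters; these values are assumptions for the sketch.
SPARSITY_LAMBDA = 1e-4   # weight of the L1 penalty on BN scaling factors
PRUNE_RATIO = 0.7        # global fraction of channels to prune after training


def add_bn_l1_subgradient(model: nn.Module, lam: float = SPARSITY_LAMBDA) -> None:
    """Step 1: after loss.backward(), add the subgradient lam * sign(gamma) of the
    L1 penalty to the gradient of every BatchNorm scaling factor."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.grad.add_(lam * torch.sign(m.weight.detach()))


def global_bn_threshold(model: nn.Module, prune_ratio: float = PRUNE_RATIO) -> float:
    """Step 2a: gather |gamma| from all BN layers and return the value below which
    the requested fraction of channels falls."""
    gammas = torch.cat([
        m.weight.detach().abs().flatten()
        for m in model.modules() if isinstance(m, nn.BatchNorm2d)
    ])
    k = int(prune_ratio * gammas.numel())
    return torch.sort(gammas).values[k].item()


def bn_channel_masks(model: nn.Module, threshold: float) -> list:
    """Step 2b: per-BN-layer boolean masks of channels to keep; channels whose
    |gamma| falls below the threshold are dropped when the thin network is built."""
    return [
        m.weight.detach().abs() > threshold
        for m in model.modules() if isinstance(m, nn.BatchNorm2d)
    ]
```

Building the thin network, by copying the weights of the surviving channels into a smaller architecture, and fine-tuning it (step 3) proceed from these masks.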

The method stands out because it adds minimal overhead during training, requires no modification of existing CNN architectures, and has no dependence on specialized hardware or software accelerators for the resulting models, which ensures broad applicability.
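
The per-iteration cost of the sparsity term is a single extra pass over the BN layers between the backward pass and the optimizer step. A hypothetical training loop using the sketch above (model, loader, criterion, and optimizer are placeholders) would look like:

```python
for inputs, targets in loader:
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    add_bn_l1_subgradient(model)  # adds the L1 subgradient on BN scaling factors
    optimizer.step()
```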

Experimental Results

The effectiveness of the proposed approach is empirically validated on several benchmark datasets including CIFAR-10, CIFAR-100, SVHN, and ImageNet, using state-of-the-art CNN architectures such as VGGNet, ResNet, and DenseNet. The experimental results demonstrate significant reductions in model size and computing operations:

  • VGGNet: Achieved a 20x reduction in model size and a 5x reduction in computing operations on CIFAR-10.
  • DenseNet-40: Demonstrated up to 65.2% reduction in parameters and 55.0% reduction in FLOPs while maintaining competitive test errors.
  • ResNet-164: Showed a decrease in test errors on CIFAR datasets with up to 44.9% reduction in FLOPs.

Additionally, on the ImageNet dataset, the compressed VGG-A model achieved an 82.5% reduction in parameters and a 30.4% reduction in FLOPs without any accuracy loss.

Implications and Future Directions

Practical Implications:

Network slimming has substantial practical implications, particularly for deploying CNNs on devices with limited computational resources. By significantly reducing model size and computational demands, this method facilitates the integration of complex CNN-based solutions in real-world applications like mobile computing, autonomous systems, and embedded systems.

Theoretical Implications:

From a theoretical standpoint, the method reinforces the utility of sparsity regularization in improving model efficiency. It also aligns with the broader trend of exploring structured sparsity and its impact on neural network performance and generalization capabilities.

Future Developments:

Future research could explore extending the principles of network slimming to other types of neural architectures, including Recurrent Neural Networks (RNNs) and Transformers. Additionally, further investigation into adaptive regularization techniques that dynamically adjust the sparsity-induced penalty during training could yield even more efficient and robust models. Exploring the implications of various sparsity levels on network interpretability and feature localization within CNNs provides another promising avenue for future research.

Overall, network slimming represents a significant advancement in the field of neural network optimization, offering a practical and effective solution for the deployment of CNNs in resource-constrained environments.
