Faster CNNs with Direct Sparse Convolutions and Guided Pruning (1608.01409v5)

Published 4 Aug 2016 in cs.CV

Abstract: Phenomenally successful in practical inference problems, convolutional neural networks (CNN) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, are often large and undesirable. Consequently, various methods have been developed to prune a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits. While pruning the fully connected layers reduces a CNN's size considerably, it does not improve inference speed noticeably as the compute heavy parts lie in convolutions. Pruning CNNs in a way that increase inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels. We present a method to realize simultaneously size economy and speed improvement while pruning CNNs. Paramount to our success is an efficient general sparse-with-dense matrix multiplication implementation that is applicable to convolution of feature maps with kernels of arbitrary sparsity patterns. Complementing this, we developed a performance model that predicts sweet spots of sparsity levels for different layers and on different computer architectures. Together, these two allow us to demonstrate 3.1--7.3$\times$ convolution speedups over dense convolution in AlexNet, on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum from mobile devices to supercomputers. We also open source our project at https://github.com/IntelLabs/SkimCaffe.

Faster CNNs with Direct Sparse Convolutions and Guided Pruning: A Summary

This paper introduces novel methodologies to enhance the computational efficiency of Convolutional Neural Networks (CNNs) through direct sparse convolutions and guided pruning strategies. The authors aim to address the challenge of CNNs' excessive parameter count, which traditionally results in substantial computational overhead, particularly in the convolution layers that dominate CNNs' processing time.

Key Contributions

  1. Direct Sparse Convolutions: The authors propose direct sparse convolution as the core technical advance. The method formulates sparse convolution as a sparse-matrix times dense-matrix multiplication without the usual lowering (im2col) of input tensors into matrices, a step that reduces arithmetic intensity and efficiency. Instead, the convolution operates against a "virtual" dense matrix, which preserves high arithmetic intensity and improves data reuse, especially when many input channels are present (a sketch of this formulation appears after this list).
  2. Performance Modelling: A performance model is developed to predict speedup potential and to guide the pruning process. The model uses a roofline-style analysis to estimate achievable speedups as a function of the non-zero density of the sparse convolution kernels and the characteristics of the target processor architecture. Notably, it shows that even moderate sparsity, around 70%, can already yield substantial speedups with the proposed method (see the roofline sketch after this list).
  3. Guided Sparsity Learning (GSL): The paper introduces Guided Sparsity Learning, a pruning algorithm that concentrates on the layers and sparsity ranges in which the performance model predicts a tangible speedup. Unlike conventional pruning, GSL stops pruning layers that fall outside their effective sparsity range, reallocating effort to where speedup is actually achievable (see the sketch after this list).
  4. Empirical Validation: The methods are validated on AlexNet and GoogLeNet across Intel Atom, Xeon, and Xeon Phi processors, demonstrating 3.1–7.3× convolution speedups over dense convolution for AlexNet without compromising model accuracy.
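
The key observation behind direct sparse convolution is that, for a stride-1 convolution, the offset of input element (c, y+r, x+s) from output position (y, x) is the same for every pixel, so each output channel's sparse kernel can be stored once as (value, offset) pairs and streamed over the whole feature map. Below is a minimal NumPy sketch of this formulation (stride 1, no padding, single image); the function names are illustrative and not part of SkimCaffe's API.

```python
import numpy as np

def sparsify_kernel(weight, in_h, in_w):
    """Encode one output channel's dense (C, R, S) kernel as (values, offsets);
    each offset indexes the flattened (C, H, W) input relative to the current
    output pixel (stride 1, no padding)."""
    C, R, S = weight.shape
    values, offsets = [], []
    for c in range(C):
        for r in range(R):
            for s in range(S):
                if weight[c, r, s] != 0.0:
                    values.append(weight[c, r, s])
                    offsets.append(c * in_h * in_w + r * in_w + s)
    return np.asarray(values), np.asarray(offsets, dtype=np.int64)

def direct_sparse_conv(x, sparse_kernels, R, S):
    """x: (C, H, W) input; sparse_kernels: one (values, offsets) pair per output
    channel from sparsify_kernel. Each output pixel is a dot product between the
    kernel's nonzeros and a gathered column of the 'virtual' dense matrix."""
    C, H, W = x.shape
    x_flat = x.reshape(-1)
    out_h, out_w = H - R + 1, W - S + 1
    out = np.zeros((len(sparse_kernels), out_h, out_w), dtype=x.dtype)
    for n, (values, offsets) in enumerate(sparse_kernels):
        for y in range(out_h):
            for xw in range(out_w):
                base = y * W + xw
                out[n, y, xw] = np.dot(values, x_flat[base + offsets])
    return out
```

Gathering x_flat[base + offsets] plays the role of one column of the "virtual" dense matrix that a lowering-based (im2col) implementation would materialize explicitly, which is why the sparse kernel can be applied with no extra lowering traffic.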

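The performance model and GSL can be illustrated together: a roofline-style estimate bounds each layer's dense and sparse execution time by either compute throughput or memory bandwidth, and pruning proceeds only in layers whose predicted speedup exceeds 1. The sketch below is schematic; the efficiency constant and the cost terms are assumptions rather than the paper's fitted parameters, and gsl_prune_step uses a simple magnitude threshold instead of the authors' full training procedure.

```python
import numpy as np

def projected_speedup(density, flops_dense, bytes_dense, bytes_sparse,
                      peak_flops, peak_bw, sparse_eff=0.4):
    """Roofline-style estimate: each variant is limited either by compute
    throughput or by memory bandwidth. sparse_eff is an assumed efficiency
    penalty for irregular sparse access, not a value from the paper."""
    t_dense = max(flops_dense / peak_flops, bytes_dense / peak_bw)
    t_sparse = max(density * flops_dense / (sparse_eff * peak_flops),
                   bytes_sparse / peak_bw)
    return t_dense / t_sparse

def gsl_prune_step(weights, predict_speedup, threshold):
    """Illustrative GSL-style pass over a dict of layer name -> weight array:
    magnitude-prune a layer only while the performance model still predicts a
    net speedup at its current density; otherwise leave the layer untouched."""
    pruned = {}
    for name, w in weights.items():
        density = np.count_nonzero(w) / w.size
        if predict_speedup(name, density) > 1.0:
            w = np.where(np.abs(w) < threshold, 0.0, w)  # zero out small weights
        pruned[name] = w
    return pruned
```
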
Implications and Future Directions

The implications of this research are manifold:

  • From a theoretical perspective, this work expands on the potential of direct sparse computation in deep learning frameworks, effectively bridging the gap between pruning-induced model size reduction and actual inference speedup.
  • Practically, the proposed methodologies align with current trends towards deploying CNNs on resource-constrained environments, such as mobile and edge computing, where computational efficiency is paramount.
  • Future Work: The current implementation focuses on direct sparse convolution; the authors point to Winograd- and FFT-based convolution algorithms as possible extensions. Layers with inherently low arithmetic intensity, such as 1×1 convolutions, benefit less from sparsity and remain a natural target for further optimization.

Overall, this paper makes substantive contributions towards more computationally efficient CNN implementations, establishing a practical approach for systematically leveraging model sparsity for faster inference while maintaining a theoretical underpinning through performance modelling. Through continued applications and optimizations, these advancements promise to significantly enhance the deployment capabilities of deep learning models across an expanded array of hardware platforms.

Authors (7)
  1. Jongsoo Park (26 papers)
  2. Sheng Li (219 papers)
  3. Wei Wen (49 papers)
  4. Ping Tak Peter Tang (16 papers)
  5. Hai Li (159 papers)
  6. Yiran Chen (176 papers)
  7. Pradeep Dubey (31 papers)
Citations (177)