Structured Pruning for Deep Convolutional Neural Networks: A survey (2303.00566v2)

Published 1 Mar 2023 in cs.CV

Abstract: The remarkable performance of deep Convolutional neural networks (CNNs) is generally attributed to their deeper and wider architectures, which can come with significant computational costs. Pruning neural networks has thus gained interest since it effectively lowers storage and computational costs. In contrast to weight pruning, which results in unstructured models, structured pruning provides the benefit of realistic acceleration by producing models that are friendly to hardware implementation. The special requirements of structured pruning have led to the discovery of numerous new challenges and the development of innovative solutions. This article surveys the recent progress towards structured pruning of deep CNNs. We summarize and compare the state-of-the-art structured pruning techniques with respect to filter ranking methods, regularization methods, dynamic execution, neural architecture search, the lottery ticket hypothesis, and the applications of pruning. While discussing structured pruning algorithms, we briefly introduce the unstructured pruning counterpart to emphasize their differences. Furthermore, we provide insights into potential research opportunities in the field of structured pruning. A curated list of neural network pruning papers can be found at https://github.com/he-y/Awesome-Pruning . A dedicated website offering a more interactive comparison of structured pruning methods can be found at: https://huggingface.co/spaces/he-yang/Structured-Pruning-Survey .

Structured Pruning for Deep Convolutional Neural Networks: A Survey

The paper “Structured Pruning for Deep Convolutional Neural Networks: A Survey,” authored by Yang He and Lingao Xiao, offers an in-depth exploration of structured pruning techniques applied to CNNs. As CNN architectures have grown deeper and wider, their computational and storage demands have spurred interest in pruning methods that reduce these costs while maintaining performance. The survey methodically examines recent developments, categorizing them by methodology and discussing their implications.

Overview

Structured pruning stands apart from weight pruning by removing entire structures such as filters, channels, or layers, which aligns more effectively with modern hardware. The paper lays out a taxonomy of structured pruning techniques and evaluates each category in turn: weight-dependent methods, activation-based methods, regularization, optimization tools, dynamic pruning, NAS-based pruning, and extensions.

Methodological Insights

  1. Weight-Dependent Techniques:
    • The survey examines methods that rely only on the model’s weights, such as Filter Norm and Filter Correlation criteria. For instance, the L1-norm is commonly used to rank filter importance, with the lowest-ranked filters removed (a minimal sketch of this criterion appears after this list).
  2. Activation-Based Approaches:
    • These methods use activation maps to assess channel importance and are divided into techniques that consider the current layer, adjacent layers, or all layers jointly, typically aiming to minimize reconstruction error or to preserve the network’s discriminative power.
  3. Regularization Methods:
    • Regularization introduces sparsity by penalizing BN parameters, extra gating parameters, or filters directly. Techniques in this category often employ Group Lasso or similar structured regularizers (a BatchNorm-based sketch appears after this list).
  4. Optimization Tools:
    • Taylor expansion and variational Bayesian methods are highlighted, with the latter optimizing posterior distributions to assess redundancy. Strategies such as ADMM and Bayesian optimization are also discussed for their capacity to impose sparsity through structured constraints.
  5. Dynamic Pruning:
    • Dynamic methods adjust pruning strategies during training or inference to best leverage model capacity based on real-time requirements and input complexity.
  6. NAS-Based Pruning:
    • Leveraging Neural Architecture Search for pruning allows for the automated discovery of efficient structures, handled through reinforcement learning, gradient-based, or evolutionary algorithms.
  7. Extensions:
    • Extensions to structured pruning include incorporating the Lottery Ticket Hypothesis and combining pruning with other compression techniques like quantization and decomposition.
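
To make the weight-dependent criterion above concrete, the following is a minimal PyTorch sketch of L1-norm filter ranking. The layer dimensions and the 30% pruning ratio are illustrative assumptions; a real pipeline would rebuild the affected layers with fewer channels and fine-tune afterwards.

```python
import torch
import torch.nn as nn

# Example convolutional layer; sizes are arbitrary for illustration.
conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3)
prune_ratio = 0.3  # hypothetical: remove 30% of filters

# Each filter is one output channel; weight shape is (out_channels, in_channels, kH, kW).
# Filter importance = L1 norm (sum of absolute weights) over that filter.
l1_norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # shape: (out_channels,)

num_prune = int(prune_ratio * conv.out_channels)
order = torch.argsort(l1_norms)   # ascending: least important first
prune_idx = order[:num_prune]     # lowest-norm filters are removed
keep_idx = order[num_prune:]      # filters that would be copied into a smaller Conv2d

print(f"Filters to prune: {sorted(prune_idx.tolist())}")
```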

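The regularization category can be sketched in the same spirit: an L1 penalty on BatchNorm scaling factors (as in Network Slimming-style approaches) drives some channels’ scales toward zero so they can be removed as whole structures. The helper and the penalty weight below are assumptions for illustration, not the exact recipe of any single surveyed method.

```python
import torch.nn as nn

def bn_l1_penalty(model: nn.Module):
    """Sum of |gamma| over all BatchNorm2d layers; channels whose gamma is
    driven toward zero become candidates for structured removal."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()  # gamma is stored in .weight
    return penalty

l1_lambda = 1e-4  # hypothetical regularization strength

# Inside an otherwise standard training step (illustrative):
#   loss = criterion(model(images), labels) + l1_lambda * bn_l1_penalty(model)
#   loss.backward()
#   optimizer.step()
# After training, channels with near-zero gamma are pruned and the model is fine-tuned.
```
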
Implications and Future Directions

The survey not only provides a robust classification and comparison of structured pruning methodologies but also explores their practical and theoretical implications. It underscores the trade-off between preserving performance and improving resource efficiency, and it sheds light on open challenges such as the interpretability of pruning decisions and the generalization behavior of pruned models.

The paper points to future research directions, including the integration of pruning with emerging architectures such as Transformers, application to domain-specific tasks, and a growing emphasis on energy-efficient AI. The role of structured pruning in federated and continual learning is singled out as particularly promising, offering pathways to address data privacy and model adaptability.

Conclusion

He and Xiao’s survey presents a thorough examination of structured pruning, framing the discourse for ongoing and future research. It provides a foundational reference for understanding the landscape, challenges, and opportunities associated with efficient neural network deployment, and it will be a valuable resource for researchers aiming to contribute to or build on structured pruning in the evolving context of deep learning and AI.
