MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning (1903.10258v3)

Published 25 Mar 2019 in cs.CV

Abstract: In this paper, we propose a novel meta learning approach for automatic channel pruning of very deep neural networks. We first train a PruningNet, a kind of meta network, which is able to generate weight parameters for any pruned structure given the target network. We use a simple stochastic structure sampling method for training the PruningNet. Then, we apply an evolutionary procedure to search for good-performing pruned networks. The search is highly efficient because the weights are directly generated by the trained PruningNet and we do not need any finetuning at search time. With a single PruningNet trained for the target network, we can search for various Pruned Networks under different constraints with little human participation. Compared to the state-of-the-art pruning methods, we have demonstrated superior performances on MobileNet V1/V2 and ResNet. Codes are available on https://github.com/liuzechun/MetaPruning.

Authors (7)
  1. Zechun Liu (48 papers)
  2. Haoyuan Mu (4 papers)
  3. Xiangyu Zhang (329 papers)
  4. Zichao Guo (15 papers)
  5. Xin Yang (320 papers)
  6. Tim Kwang-Ting Cheng (3 papers)
  7. Jian Sun (416 papers)
Citations (527)

Summary

  • The paper proposes MetaPruning, a meta-learning approach that trains a PruningNet to generate weights for various pruned architectures without iterative fine-tuning.
  • It employs an evolutionary search leveraging the trained PruningNet, enabling efficient and flexible channel pruning under constraints like FLOPs.
  • Experiments on MobileNet and ResNet models demonstrate significant accuracy improvements, with gains of up to 6.6% over the MobileNet V1 baseline at the same FLOPs.

MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning

The paper proposes MetaPruning, a novel approach that leverages meta-learning for automatic channel pruning of deep neural networks. The method aims to improve both the efficiency and the effectiveness of pruning, addressing limitations of traditional pipelines that rely on hand-tuned, layer-wise pruning ratios and repeated finetuning. The key innovation is a meta network, termed the PruningNet, that generates weight parameters for any pruned structure of the target network, enabling a streamlined and flexible pruning procedure.

Methodology

MetaPruning involves two primary steps:

  1. Training the PruningNet: The PruningNet is a meta network that learns to generate weights for any candidate pruned structure of the target network. The authors employ stochastic structure sampling during training: at each iteration a random set of channel widths is sampled and the PruningNet generates the corresponding weights, so it learns to serve many pruning configurations. Because the generated weights are usable directly, no finetuning is needed when candidates are later evaluated (a minimal sketch follows this list).
  2. Evolutionary Search for Pruned Networks: Once trained, the PruningNet supports an evolutionary search for good pruned structures under given constraints. The search is fast because the PruningNet produces weights for any candidate on the fly, allowing each candidate to be evaluated on validation data without retraining (a sketch of the search loop also follows below).
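
The following is a minimal PyTorch-style sketch of PruningNet training with stochastic structure sampling, using a toy two-layer convolutional network. The module names (MetaConvBlock, PruningNet), the encoding of the structure as normalized channel ratios, and all hyperparameters are illustrative assumptions rather than the authors' implementation; the official code at the repository linked above is the reference.

```python
# Sketch: PruningNet training with stochastic structure sampling.
# All names and sizes are illustrative; see the official repo for the real code.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaConvBlock(nn.Module):
    """Generates the weights of one conv layer from the structure encoding."""
    def __init__(self, enc_dim, max_in, max_out, k=3):
        super().__init__()
        self.max_in, self.max_out, self.k = max_in, max_out, k
        # Two fully connected layers map the encoding to a full weight tensor.
        self.fc1 = nn.Linear(enc_dim, 64)
        self.fc2 = nn.Linear(64, max_out * max_in * k * k)

    def forward(self, x, encoding, out_ch):
        in_ch = x.size(1)
        w = self.fc2(F.relu(self.fc1(encoding)))
        w = w.view(self.max_out, self.max_in, self.k, self.k)
        w = w[:out_ch, :in_ch]                      # crop to the sampled structure
        return F.conv2d(x, w, padding=self.k // 2)

class PruningNet(nn.Module):
    def __init__(self, num_classes=10, max_c1=32, max_c2=64):
        super().__init__()
        self.max_c1, self.max_c2 = max_c1, max_c2
        enc_dim = 2                                 # one entry per prunable layer
        self.block1 = MetaConvBlock(enc_dim, 3, max_c1)
        self.block2 = MetaConvBlock(enc_dim, max_c1, max_c2)
        self.fc = nn.Linear(max_c2, num_classes)

    def forward(self, x, widths):
        c1, c2 = widths
        enc = torch.tensor([c1 / self.max_c1, c2 / self.max_c2], device=x.device)
        x = F.relu(self.block1(x, enc, c1))
        x = F.relu(self.block2(x, enc, c2))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return F.linear(x, self.fc.weight[:, :c2], self.fc.bias)

net = PruningNet()
opt = torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.9)
for step in range(100):                             # stand-in for a real data loader
    images = torch.randn(8, 3, 32, 32)              # dummy batch for illustration
    labels = torch.randint(0, 10, (8,))
    # Stochastic structure sampling: pick a random width for each prunable layer.
    widths = [random.randint(4, net.max_c1), random.randint(4, net.max_c2)]
    loss = F.cross_entropy(net(images, widths), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because only the PruningNet's own parameters receive gradients, a single training run yields weight generators that can serve every candidate structure encountered later during search.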

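Below is a hedged sketch of the evolutionary search stage. The helpers evaluate (accuracy of a candidate with PruningNet-generated weights on held-out data) and flops are placeholders, and the population size, mutation rate, and number of generations are invented for illustration; only the overall loop structure mirrors the paper's description.

```python
# Sketch: evolutionary search for pruned structures under a FLOPs constraint.
# `evaluate` and `flops` are placeholders; in MetaPruning, evaluate() would run
# the candidate with PruningNet-generated weights on validation data.
import random

MAX_WIDTHS = [32, 64]        # per-layer channel upper bounds (toy two-layer net)
FLOPS_BUDGET = 0.5           # keep candidates at or below 50% of full-model FLOPs

def flops(widths):
    # Toy proxy: sum of adjacent-layer width products, normalised to the full model.
    full = 3 * MAX_WIDTHS[0] + MAX_WIDTHS[0] * MAX_WIDTHS[1]
    return (3 * widths[0] + widths[0] * widths[1]) / full

def evaluate(widths):
    return random.random()   # placeholder for validation accuracy

def random_candidate():
    while True:
        w = [random.randint(4, m) for m in MAX_WIDTHS]
        if flops(w) <= FLOPS_BUDGET:
            return w

def mutate(widths, prob=0.3):
    w = [random.randint(4, m) if random.random() < prob else c
         for c, m in zip(widths, MAX_WIDTHS)]
    return w if flops(w) <= FLOPS_BUDGET else widths

def crossover(a, b):
    w = [random.choice(pair) for pair in zip(a, b)]
    return w if flops(w) <= FLOPS_BUDGET else a

population = [random_candidate() for _ in range(20)]
for generation in range(10):
    ranked = sorted(population, key=evaluate, reverse=True)
    parents = ranked[:10]                               # keep the top candidates
    children = [mutate(random.choice(parents)) for _ in range(5)]
    children += [crossover(*random.sample(parents, 2)) for _ in range(5)]
    population = parents + children

best = max(population, key=evaluate)
print("best pruned structure (channels per layer):", best)
```

Because no finetuning happens inside the loop, the cost of the search is dominated by forward passes on a small validation set, which is what makes searching under several different constraints with a single trained PruningNet practical.
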
Results and Implications

The authors demonstrate MetaPruning's efficacy on MobileNet V1/V2 and ResNet architectures. Under fixed FLOPs budgets, the pruned networks achieve top-1 accuracy up to 6.6% higher than the MobileNet V1 baseline and up to 3.7% higher than the MobileNet V2 baseline. The ResNet experiments further confirm these gains, and MetaPruning outperforms state-of-the-art automated pruning methods such as AMC and NetAdapt.

The implications of this advancement are notable. MetaPruning reduces human intervention by removing the need to hand-tune per-layer pruning ratios, and it accommodates different constraints such as FLOPs and latency budgets. It also reframes channel pruning: identifying a good pruned structure matters more than preserving individual "important" weights.
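
As one concrete (and hypothetical) illustration of swapping constraints, the FLOPs check in the search sketch above could be replaced by a per-layer latency lookup table measured on the target device; the table values below are invented for illustration, not measurements from the paper.

```python
# Toy latency constraint: sum per-layer latencies from a measured lookup table.
LATENCY_TABLE = {            # layer index -> {channel width: measured ms}
    0: {8: 0.4, 16: 0.7, 32: 1.2},
    1: {16: 0.9, 32: 1.6, 64: 2.8},
}

def latency_ms(widths):
    return sum(LATENCY_TABLE[i][w] for i, w in enumerate(widths))

def satisfies_budget(widths, budget_ms=3.0):
    return latency_ms(widths) <= budget_ms

print(satisfies_budget([16, 32]))   # True: 0.7 + 1.6 = 2.3 ms <= 3.0 ms
```

In this setting, candidate widths would be sampled from the discrete grid covered by the table rather than from an arbitrary integer range.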

Theoretical and Practical Impact

Theoretically, MetaPruning contributes to the ongoing discourse on neural architecture search (NAS) by integrating meta-learning and channel pruning. This integration broadens the horizon for developing more adaptive and resource-efficient neural networks.

Practically, the ability to extend MetaPruning to diverse models and constraints makes it a valuable tool for real-world applications where computational efficiency is critical. The reduction in manual intervention and computational overhead is particularly beneficial for deploying models on edge devices with limited resources.

Future Directions

The MetaPruning framework opens several avenues for future research:

  • Extension to Other Architectures: Extending MetaPruning to more complex architectures and understanding its adaptability across different domains can enhance its applicability.
  • Refinement of the Evolutionary Search: While evolutionary search offers flexibility, exploring alternative search strategies could yield further improvements in speed and accuracy.
  • Integration with Other Optimization Techniques: Combining MetaPruning with techniques like quantization or sparsity constraints may lead to more comprehensive model optimization strategies.

In summary, MetaPruning presents a robust and innovative approach to channel pruning, offering both theoretical insights and practical solutions for neural network optimization. Its integration of meta-learning for network adaptation represents a meaningful step toward more efficient machine learning model deployment.