Class-Aware Pruning for Efficient Neural Networks (2312.05875v2)

Published 10 Dec 2023 in cs.AI

Abstract: Deep neural networks (DNNs) have demonstrated remarkable success in various fields. However, the large number of floating-point operations (FLOPs) in DNNs poses challenges for their deployment in resource-constrained applications, e.g., edge devices. To address this problem, pruning has been introduced to reduce the computational cost of executing DNNs. Previous pruning strategies are based on weight values, gradient values, and activation outputs. Different from previous pruning solutions, in this paper we propose a class-aware pruning technique to compress DNNs, which provides a novel perspective on reducing their computational cost. In each iteration, the neural network training is modified to facilitate class-aware pruning. Afterwards, the importance of each filter with respect to the number of classes is evaluated. Filters that are important for only a few classes are removed. The neural network is then retrained to compensate for the incurred accuracy loss. The pruning iterations continue until no more filters can be removed, indicating that the remaining filters are important for many classes. This pruning technique outperforms previous pruning solutions in terms of accuracy, pruning ratio, and FLOP reduction. Experimental results confirm that the class-aware pruning technique can significantly reduce the number of weights and FLOPs while maintaining a high inference accuracy.
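The abstract describes an iterative loop: score each filter's importance per class, remove filters that matter for only a few classes, then retrain. The exact importance metric and thresholds are not given in the abstract, so the PyTorch sketch below uses an assumed proxy (per-class mean pooled activation) and mask-based zeroing rather than structural filter removal; the names class_aware_importance, prune_step, act_thresh, and min_classes are illustrative, not from the paper.

```python
# Minimal, hypothetical sketch of one class-aware pruning step as described in the
# abstract. The importance metric (per-class mean pooled activation) and the
# thresholds act_thresh / min_classes are assumptions, not the paper's values.
import torch
import torch.nn as nn

NUM_CLASSES = 10

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 32, kernel_size=3, padding=1)  # 32 prunable filters
        self.head = nn.Linear(32, NUM_CLASSES)

    def forward(self, x):
        feat = torch.relu(self.conv(x))        # (B, 32, H, W)
        pooled = feat.mean(dim=(2, 3))         # global average pooling -> (B, 32)
        return self.head(pooled), pooled

@torch.no_grad()
def class_aware_importance(model, x, y):
    """Score each filter per class: mean pooled activation over that class's samples."""
    _, pooled = model(x)
    scores = torch.zeros(NUM_CLASSES, pooled.shape[1])
    for c in range(NUM_CLASSES):
        idx = (y == c)
        if idx.any():
            scores[c] = pooled[idx].mean(dim=0)
    return scores                              # shape: (num_classes, num_filters)

@torch.no_grad()
def prune_step(model, x, y, act_thresh=0.05, min_classes=3):
    """Zero out (mask) filters that are important for fewer than min_classes classes."""
    scores = class_aware_importance(model, x, y)
    classes_per_filter = (scores > act_thresh).sum(dim=0)
    keep = classes_per_filter >= min_classes
    model.conv.weight[~keep] = 0.0             # mask-based stand-in for physically
    model.conv.bias[~keep] = 0.0               # removing the pruned filters
    return keep

model = SmallCNN()
x = torch.randn(256, 3, 32, 32)                # random stand-in data
y = torch.randint(0, NUM_CLASSES, (256,))
kept = prune_step(model, x, y)
print(f"kept {int(kept.sum())} of {kept.numel()} filters")
# The paper alternates such pruning with retraining until no filter can be removed.
```

Mask-based zeroing keeps tensor shapes fixed for simplicity; an actual deployment would rebuild the layers (or use a structural-pruning tool) to realize the weight and FLOP savings the abstract reports.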

Authors (7)
  1. Mengnan Jiang (1 paper)
  2. Jingcun Wang (3 papers)
  3. Amro Eldebiky (6 papers)
  4. Xunzhao Yin (35 papers)
  5. Cheng Zhuo (47 papers)
  6. Ing-Chao Lin (6 papers)
  7. Grace Li Zhang (27 papers)
Citations (5)
