Fast and Controllable Post-training Sparsity: Learning Optimal Sparsity Allocation with Global Constraint in Minutes (2405.05808v1)
Abstract: Neural network sparsity has attracted much research interest due to its similarity to biological schemes and its high energy efficiency. However, existing methods depend on long training or fine-tuning, which prevents large-scale application. Recently, several works on post-training sparsity (PTS) have emerged. They avoid the high training cost but usually suffer from marked accuracy degradation because they neglect to choose a reasonable sparsity rate for each layer. Previous methods for finding sparsity rates mainly target the training-aware scenario and usually fail to converge stably under the PTS setting, with its limited data and much lower training cost. In this paper, we propose a fast and controllable post-training sparsity (FCPTS) framework. By incorporating a differentiable bridge function and a controllable optimization objective, our method learns an accurate sparsity allocation rapidly, in minutes, with the added assurance of convergence to a predetermined global sparsity rate. Equipped with these techniques, we surpass state-of-the-art methods by a large margin, e.g., over 30% improvement for ResNet-50 on ImageNet at a sparsity rate of 80%. Our plug-and-play code and supplementary materials are open-sourced at https://github.com/ModelTC/FCPTS.
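The abstract only sketches the mechanism, so the following is a minimal, hypothetical PyTorch sketch of the general idea rather than the paper's actual method: per-layer sparsity allocation is learned through a differentiable soft mask (a sigmoid stand-in assumed here for FCPTS's bridge function), while a penalty term drives the parameter-weighted global sparsity toward a predetermined target on a small calibration batch. All names, hyperparameters, and the toy model below are illustrative; see the open-sourced repository for the real implementation.

```python
# Hypothetical sketch: learn per-layer sparsity allocation under a global
# sparsity constraint, using only a small calibration batch (post-training).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy pretrained model standing in for e.g. ResNet-50; its weights are frozen.
dense = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
for p in dense.parameters():
    p.requires_grad_(False)

layers = [m for m in dense if isinstance(m, nn.Linear)]
# One learnable magnitude threshold per prunable layer (log-space keeps it > 0).
log_thresholds = nn.Parameter(torch.full((len(layers),), -4.0))

TARGET_SPARSITY = 0.80   # predetermined global sparsity rate
TEMPERATURE = 200.0      # sharpness of the assumed soft mask
LAMBDA = 10.0            # weight of the global-sparsity penalty

def soft_mask(weight, log_t):
    """Assumed differentiable 'bridge': weights whose magnitude falls below
    the learned threshold are softly driven toward zero."""
    return torch.sigmoid(TEMPERATURE * (weight.abs() - log_t.exp()))

def forward_sparse(x):
    """Forward pass with softly masked weights in every prunable layer."""
    i = 0
    for m in dense:
        if isinstance(m, nn.Linear):
            w = m.weight * soft_mask(m.weight, log_thresholds[i])
            x = F.linear(x, w, m.bias)
            i += 1
        else:
            x = m(x)
    return x

def global_sparsity():
    """Parameter-count-weighted fraction of (softly) zeroed weights."""
    zeros, total = 0.0, 0
    for i, m in enumerate(layers):
        mask = soft_mask(m.weight, log_thresholds[i])
        zeros = zeros + (1.0 - mask).sum()
        total += m.weight.numel()
    return zeros / total

opt = torch.optim.Adam([log_thresholds], lr=1e-2)
calib = torch.randn(256, 64)          # small calibration set (random here)
with torch.no_grad():
    teacher = dense(calib)            # dense-model outputs for reconstruction

for step in range(300):
    opt.zero_grad()
    recon = F.mse_loss(forward_sparse(calib), teacher)
    constraint = (global_sparsity() - TARGET_SPARSITY) ** 2
    (recon + LAMBDA * constraint).backward()
    opt.step()

print(f"learned global sparsity ~ {global_sparsity().item():.3f}")
```

Because both the reconstruction loss and the sparsity penalty are differentiable in the per-layer thresholds, layers whose outputs tolerate pruning absorb more sparsity while the penalty keeps the overall rate pinned near the target; this is the intuition behind the "controllable" objective, though the exact bridge function and constraint form in FCPTS may differ from this sketch.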