SNP: Structured Neuron-level Pruning to Preserve Attention Scores (2404.11630v1)
Abstract: Multi-head self-attention (MSA) is a key component of Vision Transformers (ViTs), which have achieved great success in various vision tasks. However, their high computational cost and memory footprint hinder deployment on resource-constrained devices. Conventional pruning approaches can compress and accelerate the MSA module only through head pruning, even though a head is not an atomic unit. To address this issue, we propose a novel graph-aware neuron-level pruning method, Structured Neuron-level Pruning (SNP). SNP prunes neurons with less informative attention scores and eliminates redundancy among heads. Specifically, it prunes graphically connected query and key layers with the least informative attention scores while preserving the overall attention scores. Value layers, which can be pruned independently, are pruned to eliminate inter-head redundancy. The proposed method effectively compresses and accelerates Transformer-based models on both edge devices and server processors. For instance, DeiT-Small with SNP runs 3.1$\times$ faster than the original model while being 21.94\% faster and 1.12\% more accurate than DeiT-Tiny. SNP also combines successfully with conventional head or block pruning approaches. Combined with head pruning, SNP can reduce the parameters and computational costs of DeiT-Base by 80\% and achieve 3.85$\times$ faster inference on an RTX3090 and 4.93$\times$ on a Jetson Nano.
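The abstract describes pruning graphically connected query and key neurons together so that the attention logits $QK^\top$ are approximately preserved (value layers are handled independently and are not sketched here). Below is a minimal Python/PyTorch sketch of that idea for a single head; the function name `prune_qk_dims`, the calibration-data setup, and the per-dimension Frobenius-norm saliency are illustrative assumptions, not the paper's exact criterion.

```python
import torch

def prune_qk_dims(W_q, W_k, X, keep_ratio=0.75):
    """Toy sketch of joint query/key neuron pruning (hypothetical helper).

    W_q, W_k: (d_model, d_head) projection weights of one attention head.
    X: (n_tokens, d_model) calibration activations.
    Returns pruned projections whose product Q K^T approximates the original.
    """
    Q, K = X @ W_q, X @ W_k                      # (n_tokens, d_head) each
    d_head = Q.shape[1]
    # Dimension i contributes the rank-1 term Q[:, i] K[:, i]^T to the
    # attention logits Q K^T; score it by its Frobenius norm (assumed saliency).
    saliency = torch.stack([
        torch.linalg.norm(torch.outer(Q[:, i], K[:, i])) for i in range(d_head)
    ])
    keep = torch.topk(saliency, int(keep_ratio * d_head)).indices.sort().values
    # Prune the same columns from both projections so the surviving
    # query/key neurons stay paired and the logits are preserved.
    return W_q[:, keep], W_k[:, keep]

# Usage with random tensors standing in for real weights and activations.
torch.manual_seed(0)
X = torch.randn(16, 64)                          # 16 tokens, d_model = 64
W_q, W_k = torch.randn(64, 32), torch.randn(64, 32)
W_q_p, W_k_p = prune_qk_dims(W_q, W_k, X)
orig = (X @ W_q) @ (X @ W_k).T
approx = (X @ W_q_p) @ (X @ W_k_p).T
print(torch.linalg.norm(orig - approx) / torch.linalg.norm(orig))  # relative error
```

Because $QK^\top = \sum_i Q_{:,i} K_{:,i}^\top$, removing the lowest-norm terms changes the logits the least, which is why the query and key columns must be dropped in matching pairs rather than independently.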
- Kyunghwan Shim
- Jaewoong Yun
- Shinkook Choi