
SPViT: Enabling Faster Vision Transformers via Soft Token Pruning (2112.13890v2)

Published 27 Dec 2021 in cs.CV, cs.AI, cs.AR, and cs.LG

Abstract: Recently, the Vision Transformer (ViT) has continuously set new milestones in computer vision, but its high computation and memory cost hinders deployment in industrial production. Pruning, a traditional model compression paradigm for hardware efficiency, has been widely applied to various DNN structures. Nevertheless, it remains unclear how to perform pruning tailored to the ViT structure. Considering three key points: the structural characteristics, the internal data pattern of ViTs, and the related edge-device deployment, we leverage input token sparsity and propose a computation-aware soft pruning framework, which can be applied to vanilla Transformers of both flattened and CNN-type structures, such as Pooling-based ViT (PiT). More concretely, we design a dynamic, attention-based multi-head token selector, a lightweight module for adaptive instance-wise token selection. We further introduce a soft pruning technique that merges the less informative tokens identified by the selector module into a package token, which participates in subsequent calculations rather than being completely discarded. Our framework couples the accuracy-computation trade-off to the constraints of specific edge devices through our proposed computation-aware training strategy. Experimental results show that our framework significantly reduces the computation cost of ViTs while maintaining comparable performance on image classification. Moreover, our framework guarantees that the identified model meets the resource specifications of mobile devices and FPGAs, and even achieves real-time execution of DeiT-T on mobile platforms. For example, our method reduces the latency of DeiT-T to 26 ms (26%-41% lower than existing works) on a mobile device, with 0.25%-4% higher top-1 accuracy on ImageNet.
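To make the soft pruning idea concrete, below is a minimal PyTorch sketch of the core operation the abstract describes: score the tokens, keep the most informative ones, and merge the rest into a single "package" token instead of dropping them. The class name `SoftTokenPruner`, the `keep_ratio` parameter, and the simple linear scorer are illustrative assumptions; the paper's actual selector is an attention-based multi-head module, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class SoftTokenPruner(nn.Module):
    """Illustrative sketch of soft token pruning: less informative tokens
    are merged into one package token rather than being discarded."""

    def __init__(self, dim: int, keep_ratio: float = 0.7):
        super().__init__()
        self.keep_ratio = keep_ratio
        # Lightweight per-token scorer (stand-in for the paper's
        # attention-based multi-head token selector).
        self.scorer = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [B, N, D] patch tokens (the class token is assumed to be
        # handled separately upstream).
        B, N, D = x.shape
        scores = self.scorer(x).squeeze(-1).softmax(dim=-1)          # [B, N]

        n_keep = max(1, int(N * self.keep_ratio))
        keep_idx = scores.topk(n_keep, dim=-1).indices               # [B, n_keep]
        keep_mask = torch.zeros(B, N, device=x.device) \
                         .scatter_(1, keep_idx, 1.0).bool()          # [B, N]

        kept = x[keep_mask].view(B, n_keep, D)                       # informative tokens
        pruned = x[~keep_mask].view(B, N - n_keep, D)                # less informative tokens
        w = scores[~keep_mask].view(B, N - n_keep, 1)

        # Package token: score-weighted average of the pruned tokens,
        # so their information still participates in later layers.
        package = (w * pruned).sum(dim=1, keepdim=True) \
                  / w.sum(dim=1, keepdim=True).clamp_min(1e-6)

        return torch.cat([kept, package], dim=1)                     # [B, n_keep + 1, D]
```

In a full pipeline, a module like this would sit between Transformer blocks, and the computation-aware training strategy would tune the effective keep ratio so the resulting FLOPs fit a given mobile or FPGA budget.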

Authors (12)
  1. Zhenglun Kong (33 papers)
  2. Peiyan Dong (18 papers)
  3. Xiaolong Ma (57 papers)
  4. Xin Meng (37 papers)
  5. Mengshu Sun (41 papers)
  6. Wei Niu (68 papers)
  7. Xuan Shen (29 papers)
  8. Geng Yuan (58 papers)
  9. Bin Ren (136 papers)
  10. Minghai Qin (28 papers)
  11. Hao Tang (378 papers)
  12. Yanzhi Wang (197 papers)
Citations (111)