
SkipViT: Speeding Up Vision Transformers with a Token-Level Skip Connection (2401.15293v1)

Published 27 Jan 2024 in cs.CV, cs.AI, and cs.LG

Abstract: Vision transformers are known to be more computationally and data-intensive than CNN models. Transformer models such as ViT require all the input image tokens to learn the relationships among them. However, many of these tokens are not informative and may contain irrelevant content such as unrelated background or unimportant scenery. These tokens are overlooked by the multi-head self-attention (MHSA), resulting in many redundant and unnecessary computations in MHSA and the feed-forward network (FFN). In this work, we propose a method to reduce the number of unnecessary interactions between unimportant tokens by separating them and sending them through a different low-cost computational path. Our method does not add any parameters to the ViT model and aims to find the best trade-off between training throughput and a 0% loss in the Top-1 accuracy of the final model. Our experimental results on training ViT-small from scratch show that SkipViT can effectively drop 55% of the tokens while gaining more than 13% training throughput and maintaining classification accuracy at the level of the baseline model on a Huawei Ascend 910A.
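
The core idea described in the abstract is to let unimportant tokens bypass the expensive MHSA + FFN path of a transformer block via a token-level skip connection. The sketch below illustrates one way this could look in PyTorch; the module name `TokenSkipBlock`, the `keep_ratio` parameter, the externally supplied per-token `importance` scores, and the scatter-based merge are all assumptions for illustration, not the paper's exact recipe.

```python
# Minimal sketch of a token-level skip connection (assumptions: importance
# scores are provided per patch token, and skipped tokens take an identity
# path; SkipViT's actual scoring rule and drop schedule may differ).
import torch
import torch.nn as nn


class TokenSkipBlock(nn.Module):
    """Wraps a ViT encoder block so only the top-k most important patch tokens
    pass through MHSA + FFN; the rest take a zero-cost identity skip."""

    def __init__(self, block: nn.Module, keep_ratio: float = 0.45):
        super().__init__()
        self.block = block          # a standard ViT encoder block (MHSA + FFN)
        self.keep_ratio = keep_ratio

    def forward(self, x: torch.Tensor, importance: torch.Tensor) -> torch.Tensor:
        # x:          (B, 1 + N, D)  -- CLS token followed by N patch tokens
        # importance: (B, N)         -- one score per patch token (assumed given)
        cls_tok, patches = x[:, :1], x[:, 1:]
        n_keep = max(1, int(patches.shape[1] * self.keep_ratio))

        # Indices of the most important patch tokens.
        keep_idx = importance.topk(n_keep, dim=1).indices               # (B, k)
        gather_idx = keep_idx.unsqueeze(-1).expand(-1, -1, patches.shape[-1])
        kept = patches.gather(1, gather_idx)                            # (B, k, D)

        # Full-cost path: CLS + kept tokens go through the transformer block.
        out = self.block(torch.cat([cls_tok, kept], dim=1))             # (B, 1+k, D)

        # Low-cost path: skipped tokens are carried forward unchanged, then the
        # processed tokens are scattered back into their original positions.
        merged = patches.clone()
        merged.scatter_(1, gather_idx, out[:, 1:])
        return torch.cat([out[:, :1], merged], dim=1)                   # (B, 1+N, D)
```

With `keep_ratio` around 0.45, roughly 55% of the patch tokens bypass the block, matching the drop rate reported in the abstract; in practice the importance scores might come, for example, from the CLS token's attention weights in a preceding layer.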

Authors (10)
  1. Foozhan Ataiefard (6 papers)
  2. Walid Ahmed (13 papers)
  3. Habib Hajimolahoseini (10 papers)
  4. Saina Asani (3 papers)
  5. Farnoosh Javadi (5 papers)
  6. Mohammad Hassanpour (6 papers)
  7. Omar Mohamed Awad (8 papers)
  8. Austin Wen (6 papers)
  9. Kangling Liu (3 papers)
  10. Yang Liu (2253 papers)
Citations (2)

