
Comprehensive Survey of Model Compression and Speed up for Vision Transformers (2404.10407v1)

Published 16 Apr 2024 in cs.CV

Abstract: Vision Transformers (ViT) have marked a paradigm shift in computer vision, outperforming state-of-the-art models across diverse tasks. However, their practical deployment is hampered by high computational and memory demands. This study addresses the challenge by evaluating four primary model compression techniques: quantization, low-rank approximation, knowledge distillation, and pruning. We methodically analyze and compare the efficacy of these techniques and their combinations in optimizing ViTs for resource-constrained environments. Our comprehensive experimental evaluation demonstrates that these methods facilitate a balanced compromise between model accuracy and computational efficiency, paving the way for wider application in edge computing devices.
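Of the four techniques surveyed, quantization is perhaps the simplest to illustrate: weights are mapped from floating point to a low-bit integer grid and back. The snippet below is a minimal sketch of symmetric per-tensor int8 quantization in plain Python — an illustration of the general idea, not the paper's specific method; the function names and the toy weight values are our own.

```python
# Sketch of post-training symmetric uniform quantization to int8.
# (Hypothetical helper names; not from the surveyed paper.)

def quantize_int8(weights):
    """Map float weights onto the signed int8 grid [-127, 127]."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0  # one quantization step in float units
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.52, -1.30, 0.07, 0.99]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored weight differs from the original by at most one step (scale).
```

Since the quantization error per weight is bounded by the step size, accuracy typically degrades gracefully as bit width shrinks — the accuracy/efficiency trade-off the abstract describes.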

Authors (5)
  1. Feiyang Chen (18 papers)
  2. Ziqian Luo (6 papers)
  3. Lisang Zhou (4 papers)
  4. Xueting Pan (3 papers)
  5. Ying Jiang (70 papers)
Citations (16)