
Quasar-ViT: Hardware-Oriented Quantization-Aware Architecture Search for Vision Transformers (2407.18175v1)

Published 25 Jul 2024 in cs.LG, cs.AI, and cs.CV

Abstract: Vision transformers (ViTs) have demonstrated superior accuracy over convolutional neural networks (CNNs) for computer vision tasks. However, ViT models are often too computation-intensive for efficient deployment on resource-limited edge devices. This work proposes Quasar-ViT, a hardware-oriented quantization-aware architecture search framework for ViTs, to design efficient ViT models for hardware implementation while preserving accuracy. First, Quasar-ViT trains a supernet using our row-wise flexible mixed-precision quantization scheme, mixed-precision weight entanglement, and supernet layer scaling techniques. Then, it applies an efficient hardware-oriented search algorithm, integrated with hardware latency and resource modeling, to determine a series of optimal subnets from the supernet under different inference latency targets. Finally, we propose a series of model-adaptive designs on the FPGA platform to support the architecture search and to mitigate the gap between the theoretical computation reduction and the practical inference speedup. Our searched models achieve 101.5, 159.6, and 251.6 frames per second (FPS) inference speed on the AMD/Xilinx ZCU102 FPGA with 80.4%, 78.6%, and 74.9% top-1 accuracy, respectively, on the ImageNet dataset, consistently outperforming prior works.
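The row-wise flexible mixed-precision quantization scheme assigns a bit-width per weight row rather than per layer. Below is a minimal sketch of that idea, assuming symmetric uniform (fake) quantization; the function names and the 4-bit/8-bit choices are illustrative, not the paper's actual implementation.

```python
import numpy as np

def quantize_row(row: np.ndarray, bits: int) -> np.ndarray:
    """Fake-quantize one weight row to `bits` bits (symmetric, uniform)."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit signed
    max_abs = float(np.abs(row).max())
    scale = max_abs / qmax if max_abs > 0 else 1.0  # per-row scale factor
    q = np.clip(np.round(row / scale), -qmax, qmax)
    return (q * scale).astype(row.dtype)            # dequantized (fake-quantized) row

def quantize_weight_rowwise(weight: np.ndarray, row_bits: list[int]) -> np.ndarray:
    """Apply a (possibly different) bit-width to each row of a weight matrix."""
    assert weight.shape[0] == len(row_bits)
    return np.stack([quantize_row(w, b) for w, b in zip(weight, row_bits)])

# Example: two rows kept at 8-bit precision, two reduced to 4-bit.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)).astype(np.float32)
W_q = quantize_weight_rowwise(W, row_bits=[4, 8, 4, 8])
print(float(np.abs(W - W_q).max()))  # quantization error; larger on the 4-bit rows
```

The hardware-oriented search then selects, among candidate configurations, the one with the best accuracy whose modeled latency stays under a given target. A toy sketch of that selection loop follows; `modeled_latency_ms` and `accuracy_proxy` are hypothetical stand-ins, not the paper's FPGA latency and resource models.

```python
from itertools import product

def modeled_latency_ms(row_bits: tuple[int, ...]) -> float:
    # Hypothetical latency model: lower bit-widths compute faster.
    return sum(0.1 * b for b in row_bits)

def accuracy_proxy(row_bits: tuple[int, ...]) -> float:
    # Hypothetical accuracy proxy: higher bit-widths preserve more accuracy.
    return float(sum(row_bits))

def search_subnet(num_rows: int, choices=(4, 8), latency_target_ms: float = 2.0):
    """Return the candidate with the best accuracy proxy under the latency target."""
    best = None
    for cand in product(choices, repeat=num_rows):
        if modeled_latency_ms(cand) <= latency_target_ms:
            if best is None or accuracy_proxy(cand) > accuracy_proxy(best):
                best = cand
    return best

print(search_subnet(num_rows=4))  # -> (4, 4, 4, 8) under the 2.0 ms budget
```

In the paper, this exhaustive loop is replaced by an efficient search over the trained supernet, producing one subnet per inference latency target.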

Authors (12)
  1. Zhengang Li (31 papers)
  2. Alec Lu (4 papers)
  3. Yanyue Xie (12 papers)
  4. Zhenglun Kong (33 papers)
  5. Mengshu Sun (41 papers)
  6. Hao Tang (378 papers)
  7. Zhong Jia Xue (1 paper)
  8. Peiyan Dong (18 papers)
  9. Caiwen Ding (98 papers)
  10. Yanzhi Wang (197 papers)
  11. Xue Lin (92 papers)
  12. Zhenman Fang (21 papers)
Citations (5)