Scaled Quantization for the Vision Transformer (2303.13601v1)

Published 23 Mar 2023 in eess.IV, cs.AR, and cs.CV

Abstract: Quantization using a small number of bits shows promise for reducing latency and memory usage in deep neural networks. However, most quantization methods cannot readily handle complicated functions such as exponential and square root, and prior approaches involve complex training processes that must interact with floating-point values. This paper proposes a robust method for the full integer quantization of vision transformer networks without requiring any intermediate floating-point computations. The quantization techniques can be applied in various hardware or software implementations, including processor/memory architectures and FPGAs.
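
As a rough illustration of the general idea of integer-only inference for the linear (matrix-multiply) parts of a transformer, the sketch below quantizes activations and weights with per-tensor scales and folds the output rescaling into a precomputed fixed-point multiplier and shift, so no floating-point arithmetic is needed at run time. This is not the paper's specific method: the scale calibration, bit widths, and helper names are assumptions for illustration, and the sketch does not cover the integer handling of nonlinear functions such as exponential and square root that the abstract highlights.

```python
# Minimal sketch of scaled integer quantization for a linear layer.
# Assumed, illustrative code only -- not the method from this paper.
import numpy as np

def quantize(x, scale, bits=8):
    """Map float values to signed integers: q = round(x / scale), clipped to range."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), qmin, qmax).astype(np.int32)

def requant_multiplier(scale_in, scale_w, scale_out, shift=24):
    """Precompute an integer multiplier so requantization needs only a
    multiply and a right shift at run time (no floating point)."""
    return int(round(scale_in * scale_w / scale_out * (1 << shift))), shift

def int_linear(q_x, q_w, multiplier, shift, bits=8):
    """Integer-only linear layer: int8-range operands, wide integer
    accumulation, then fixed-point rescale back to the int8 range."""
    acc = q_x.astype(np.int64) @ q_w.astype(np.int64).T   # wide accumulation
    out = (acc * multiplier) >> shift                      # fixed-point rescale
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(out, qmin, qmax).astype(np.int32)

# Example: calibrate scales from float tensors offline, then run integer-only.
x = np.random.randn(4, 16).astype(np.float32)   # activations
w = np.random.randn(8, 16).astype(np.float32)   # weights
s_x, s_w, s_out = x.std() / 32, w.std() / 32, 0.1  # assumed calibration choices
m, sh = requant_multiplier(s_x, s_w, s_out)
q_out = int_linear(quantize(x, s_x), quantize(w, s_w), m, sh)
print(q_out.shape, q_out.dtype)
```

The fixed-point multiplier-and-shift trick is a common way to keep matrix multiplications integer-only; the harder part the abstract points to is approximating exponential and square root (needed for softmax and normalization) with integer arithmetic, which this sketch does not attempt.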

Authors (2)
  1. Yangyang Chang (1 paper)
  2. Gerald E. Sobelman (1 paper)
Citations (1)
