Quantizing Convolutional Neural Networks for Low-Power High-Throughput Inference Engines (1805.07941v1)

Published 21 May 2018 in cs.LG, cs.AI, cs.CV, and stat.ML

Abstract: Deep learning as a means to inferencing has proliferated thanks to its versatility and ability to approach or exceed human-level accuracy. These computational models have seemingly insatiable appetites for computational resources not only while training, but also when deployed at scales ranging from data centers all the way down to embedded devices. As such, increasing consideration is being made to maximize the computational efficiency given limited hardware and energy resources and, as a result, inferencing with reduced precision has emerged as a viable alternative to the IEEE 754 Standard for Floating-Point Arithmetic. We propose a quantization scheme that allows inferencing to be carried out using arithmetic that is fundamentally more efficient when compared to even half-precision floating-point. Our quantization procedure is significant in that we determine our quantization scheme parameters by calibrating against its reference floating-point model using a single inference batch rather than (re)training and achieve end-to-end post quantization accuracies comparable to the reference model.
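The abstract's key idea is post-training calibration: quantization parameters are derived from the statistics of a single inference batch run through the reference floating-point model, with no (re)training. A minimal sketch of that idea, using symmetric max-abs calibration (an illustrative choice; the paper's exact scheme is not reproduced here, and all function names below are hypothetical):

```python
def calibrate_scale(calibration_batch, num_bits=8):
    """Derive a symmetric quantization scale from one calibration batch.

    Maps the largest observed magnitude onto the top of the signed
    integer range -- statistics from a single forward pass, no retraining.
    (Illustrative sketch only, not the paper's exact scheme.)
    """
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8-bit
    max_abs = max(abs(v) for v in calibration_batch)
    return max_abs / qmax

def quantize(values, scale, num_bits=8):
    """Round to the nearest integer step and clip to the signed range."""
    qmax = 2 ** (num_bits - 1) - 1
    qmin = -qmax - 1
    return [max(qmin, min(qmax, round(v / scale))) for v in values]

def dequantize(q_values, scale):
    """Map integer codes back to real values for accuracy comparison."""
    return [q * scale for q in q_values]
```

With round-to-nearest, any in-range value is recovered to within half a quantization step (`scale / 2`), which is the property that lets a well-calibrated low-precision model approach the accuracy of its floating-point reference.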

Authors (9)
  1. Sean O. Settle (1 paper)
  2. Manasa Bollavaram (2 papers)
  3. Paolo D'Alberto (10 papers)
  4. Elliott Delaye (2 papers)
  5. Oscar Fernandez (1 paper)
  6. Nicholas Fraser (11 papers)
  7. Aaron Ng (1 paper)
  8. Ashish Sirasao (9 papers)
  9. Michael Wu (10 papers)
Citations (20)
