Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN) (1807.06964v1)

Published 17 Jul 2018 in cs.CV

Abstract: Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. In order to reduce this cost, several quantization schemes have gained attention recently with some focusing on weight quantization, and others focusing on quantizing activations. This paper proposes novel techniques that target weight and activation quantizations separately resulting in an overall quantized neural network (QNN). The activation quantization technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter $\alpha$ that is optimized during training to find the right quantization scale. The weight quantization scheme, statistics-aware weight binning (SAWB), finds the optimal scaling factor that minimizes the quantization error based on the statistical characteristics of the distribution of weights without the need for an exhaustive search. The combination of PACT and SAWB results in a 2-bit QNN that achieves state-of-the-art classification accuracy (comparable to full precision networks) across a range of popular models and datasets.
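
The abstract describes the mechanics of both quantizers, so a small sketch may help make them concrete. The PyTorch-style code below is a minimal illustration of PACT's trainable clipping parameter and SAWB's statistics-derived weight scale, not the authors' implementation; the class and function names, the initial alpha value, and the c1/c2 coefficients are placeholders chosen for the example (the paper fits its coefficients per bit-width).

```python
import torch
import torch.nn as nn


class PACT(nn.Module):
    """Activation quantizer with a trainable clipping parameter alpha (sketch)."""

    def __init__(self, bits=2, alpha_init=10.0):
        super().__init__()
        self.bits = bits
        self.alpha = nn.Parameter(torch.tensor(float(alpha_init)))  # learned during training

    def forward(self, x):
        # PACT clipping: 0.5 * (|x| - |x - alpha| + alpha) == clip(x, 0, alpha),
        # written this way so the gradient with respect to alpha is well defined.
        y = 0.5 * (x.abs() - (x - self.alpha).abs() + self.alpha)
        scale = (2 ** self.bits - 1) / self.alpha
        y_q = torch.round(y * scale) / scale
        return y + (y_q - y).detach()  # straight-through estimator for rounding


def sawb_alpha(w, c1=3.2, c2=-2.1):
    # Statistics-aware weight binning: derive the clipping scale directly from the
    # first and second moments of the weight distribution, avoiding an exhaustive
    # search. c1/c2 are bit-width-dependent; the values here are placeholders.
    return c1 * torch.sqrt((w * w).mean()) + c2 * w.abs().mean()


def quantize_weights_2bit(w, alpha):
    # Symmetric 2-bit quantization onto the levels {-alpha, -alpha/3, +alpha/3, +alpha}.
    step = 2 * alpha / 3
    w_q = torch.clamp(torch.round(w / step - 0.5) + 0.5, -1.5, 1.5) * step
    return w + (w_q - w).detach()  # straight-through estimator


if __name__ == "__main__":
    act_q = PACT(bits=2)
    x = torch.randn(8, 16)
    print(act_q(x).unique())  # activations land on 4 uniform levels in [0, alpha]

    w = torch.randn(16, 16) * 0.05
    print(quantize_weights_2bit(w, sawb_alpha(w)).unique())  # 4 symmetric weight levels
```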

Authors (6)
  1. Jungwook Choi (28 papers)
  2. Pierce I-Jen Chuang (4 papers)
  3. Zhuo Wang (55 papers)
  4. Swagath Venkataramani (14 papers)
  5. Vijayalakshmi Srinivasan (4 papers)
  6. Kailash Gopalakrishnan (12 papers)
Citations (75)
