EasyQuant: Post-training Quantization via Scale Optimization (2006.16669v1)

Published 30 Jun 2020 in cs.CV, cs.LG, and eess.IV

Abstract: 8-bit quantization has been widely applied to accelerate network inference in various deep learning applications. There are two kinds of quantization methods: training-based quantization and post-training quantization. The training-based approach suffers from a cumbersome training process, while post-training quantization may lead to an unacceptable accuracy drop. In this paper, we present an efficient and simple post-training method via scale optimization, named EasyQuant (EQ), that can obtain accuracy comparable to the training-based method. Specifically, we first alternately optimize the scales of weights and activations for all layers, targeting the convolutional outputs, to obtain high quantization precision. Then, we lower the bit width to INT7 for both weights and activations, and adopt INT16 intermediate storage and an integer Winograd convolution implementation to accelerate inference. Experimental results on various computer vision tasks show that EQ outperforms the TensorRT method and can achieve near-INT8 accuracy with 7-bit post-training quantization.
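
The alternating scale optimization described in the abstract can be sketched as follows. This is a minimal, illustrative Python example, assuming per-tensor symmetric quantization and a cosine-similarity objective between the FP32 and quantized layer outputs; the function names (`easyquant_layer`, `search_scale`, `quantize`), the max-based scale initialization, and the search interval are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def quantize(x, scale, bits=7):
    # Symmetric fake-quantization: round to signed integers of the given bit
    # width, then rescale back to floating point for comparison with FP32.
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

def cosine_similarity(a, b):
    return float(np.dot(a.ravel(), b.ravel())) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def search_scale(objective, init_scale, n_candidates=100, radius=0.5):
    # Try candidate scales in an interval around the initial (max-based) scale
    # and keep the one that best preserves the layer's FP32 output.
    best_scale, best_sim = init_scale, -1.0
    for s in np.linspace((1 - radius) * init_scale, (1 + radius) * init_scale, n_candidates):
        sim = objective(s)
        if sim > best_sim:
            best_scale, best_sim = s, sim
    return best_scale

def easyquant_layer(layer_fp32, weights, activations, bits=7, n_rounds=3):
    # Alternately optimize the weight scale s_w and the activation scale s_a
    # so the quantized layer output stays close (cosine similarity) to FP32.
    target = layer_fp32(weights, activations)               # FP32 reference output
    qmax = 2 ** (bits - 1) - 1
    s_w = np.abs(weights).max() / qmax                      # max-based init
    s_a = np.abs(activations).max() / qmax
    for _ in range(n_rounds):
        s_w = search_scale(lambda s: cosine_similarity(
            target, layer_fp32(quantize(weights, s, bits),
                               quantize(activations, s_a, bits))), s_w)
        s_a = search_scale(lambda s: cosine_similarity(
            target, layer_fp32(quantize(weights, s_w, bits),
                               quantize(activations, s, bits))), s_a)
    return s_w, s_a

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 128))           # stand-in "weights"
    A = rng.normal(size=(128, 32))           # stand-in "activations"
    matmul = lambda w, a: w @ a              # matmul as a stand-in for convolution
    print("optimized scales:", easyquant_layer(matmul, W, A))
```

The alternation mirrors the abstract: the activation scale is held fixed while the weight scale is searched against the layer's FP32 output, then the roles are swapped, for a few rounds per layer.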

Authors (6)
  1. Di Wu (477 papers)
  2. Qi Tang (48 papers)
  3. Yongle Zhao (4 papers)
  4. Ming Zhang (313 papers)
  5. Ying Fu (98 papers)
  6. Debing Zhang (29 papers)
Citations (64)
