Optimal Quantization for Batch Normalization in Neural Network Deployments and Beyond (2008.13128v1)

Published 30 Aug 2020 in cs.LG and stat.ML

Abstract: Quantized Neural Networks (QNNs) use low bit-width fixed-point numbers to represent weight parameters and activations, and are often used in real-world applications because they save computation resources and yield reproducible results. Batch Normalization (BN) poses a challenge for QNNs because it requires floating-point arithmetic for its reciprocal operations, so previous QNNs either compute BN at high precision or replace it with heuristic variants. In this work, we propose a novel method to quantize BN by converting its affine transformation of two floating points into a fixed-point operation with a shared quantized scale, which is friendly to hardware acceleration and model deployment. We confirm through rigorous theoretical and numerical analysis that our method preserves the same outputs. The accuracy and efficiency of our quantization method are verified by layer-level experiments on the CIFAR and ImageNet datasets. We also believe that our method is potentially useful in other problems involving quantization.
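
The abstract describes replacing BN's floating-point affine step with a fixed-point multiply-add under a shared quantized scale. The sketch below illustrates that general idea only, not the paper's exact scheme: the bit-width, rounding, and function names are assumptions, and the activation quantization scale is omitted for brevity.

```python
import numpy as np

def fold_bn_to_affine(gamma, beta, mu, var, eps=1e-5):
    """Fold per-channel BN statistics into an affine transform y = a*x + b.

    a and b are floating point; the reciprocal square root is the part that
    plain fixed-point hardware cannot evaluate directly.
    """
    a = gamma / np.sqrt(var + eps)
    b = beta - a * mu
    return a, b

def quantize_affine_fixed_point(a, b, frac_bits=16):
    """Approximate the float affine transform with integers and a shared
    power-of-two scale (illustrative; the paper's multipliers and bit-widths
    may differ)."""
    scale = 1 << frac_bits
    a_q = np.round(a * scale).astype(np.int64)  # fixed-point slope
    b_q = np.round(b * scale).astype(np.int64)  # fixed-point intercept
    return a_q, b_q, frac_bits

def apply_quantized_bn(x_int, a_q, b_q, frac_bits):
    """Apply BN as an integer multiply-add followed by a right shift, so no
    floating-point reciprocal is needed at inference time."""
    acc = x_int.astype(np.int64) * a_q + b_q
    return acc >> frac_bits  # rescale by the shared quantized scale

# Tiny single-channel usage example with made-up statistics.
gamma, beta, mu, var = 1.2, -0.3, 0.5, 0.25
a, b = fold_bn_to_affine(gamma, beta, mu, var)
a_q, b_q, fb = quantize_affine_fixed_point(a, b)
x_int = np.array([0, 1, 2, 3], dtype=np.int64)  # quantized activations
print(apply_quantized_bn(x_int, a_q, b_q, fb))
```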

Authors (5)
  1. Dachao Lin (16 papers)
  2. Peiqin Sun (5 papers)
  3. Guangzeng Xie (11 papers)
  4. Shuchang Zhou (51 papers)
  5. Zhihua Zhang (118 papers)
Citations (2)
