OverQ: Opportunistic Outlier Quantization for Neural Network Accelerators (1910.06909v2)

Published 13 Oct 2019 in cs.LG and stat.ML

Abstract: Outliers in weights and activations pose a key challenge for fixed-point quantization of neural networks. While they can be addressed by fine-tuning, this is not practical for ML service providers (e.g., Google or Microsoft) who often receive customer models without training data. Specialized hardware for handling activation outliers can enable low-precision neural networks, but at the cost of nontrivial area overhead. We instead propose overwrite quantization (OverQ), a lightweight hardware technique that opportunistically increases bitwidth for activation outliers by overwriting nearby zeros. It has two major modes of operation: range overwrite and precision overwrite. Range overwrite reallocates bits to increase the range of outliers, while precision overwrite reuses zeros to increase the precision of non-outlier values. Combining range overwrite with a simple cascading logic, we handle the vast majority of outliers to significantly improve model accuracy at low bitwidth. Our experiments show that with modest cascading, we can consistently handle over 90% of outliers and achieve +5% ImageNet Top-1 accuracy on a quantized ResNet-50 at 4 bits. Our ASIC prototype shows OverQ can be implemented efficiently on top of existing weight-stationary systolic arrays with small area increases per processing element. We imagine this technique can complement modern DNN accelerator designs to provide small increases in accuracy with insignificant area overhead.
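The range-overwrite idea described in the abstract can be made concrete with a small numerical sketch. The snippet below is our own illustration, not the paper's implementation: the function names, the assumption that a borrowed zero slot doubles the outlier's representable range, and the left/right neighbor-search policy are all simplifications introduced here for readability.

```python
import numpy as np

def quantize(x, bits):
    """Plain symmetric fixed-point quantization to `bits` bits (for illustration)."""
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(x), -qmax - 1, qmax), qmax

def range_overwrite_sim(acts, bits=4):
    """Functional sketch of range overwrite: an activation that clips at `bits`
    bits may borrow an adjacent zero's slot, extending its representable range.
    This models only the numerics, not the paper's encoding or hardware."""
    q, qmax = quantize(acts, bits)
    out = q.astype(float)
    borrowed = set()                        # each zero slot can be borrowed once
    wide_qmax = 2 ** (2 * bits - 1) - 1     # assumed range with one borrowed slot
    for i, x in enumerate(acts):
        if abs(np.round(x)) > qmax:         # outlier that the base bitwidth clips
            for j in (i - 1, i + 1):        # look for a neighboring zero to overwrite
                if 0 <= j < len(acts) and q[j] == 0 and j not in borrowed:
                    out[i] = np.clip(np.round(x), -wide_qmax - 1, wide_qmax)
                    borrowed.add(j)
                    break
    return out

acts = np.array([1.0, 0.0, 23.0, -2.0, 0.0, 9.0])
print(quantize(acts, 4)[0])       # [ 1.  0.  7. -2.  0.  7.]  -> outliers clipped at 4 bits
print(range_overwrite_sim(acts))  # [ 1.  0. 23. -2.  0.  9.]  -> outliers preserved via zero slots
```

In this toy view, an outlier that would otherwise saturate at the 4-bit limit keeps its value whenever an unused neighboring zero is available, which mirrors the abstract's claim that reusing nearby zeros recovers accuracy lost to clipping.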

Authors (6)
  1. Ritchie Zhao (11 papers)
  2. Jordan Dotzel (13 papers)
  3. Zhanqiu Hu (4 papers)
  4. Preslav Ivanov (1 paper)
  5. Christopher De Sa (77 papers)
  6. Zhiru Zhang (51 papers)
Citations (1)