Subtensor Quantization for Mobilenets (2011.08009v1)

Published 4 Nov 2020 in cs.CV and cs.LG

Abstract: Quantization for deep neural networks (DNNs) has enabled developers to deploy models with a smaller memory footprint and more efficient low-power inference. However, not all DNN designs are friendly to quantization. For example, the popular Mobilenet architecture has been tuned to reduce parameter size and computational latency with separable depth-wise convolutions, but not all quantization algorithms work well on it, and accuracy can suffer relative to the floating-point version. In this paper, we analyze several root causes of quantization loss and propose alternatives that do not rely on per-channel or training-aware approaches. We evaluate the image classification task on the ImageNet dataset, and our post-training 8-bit quantized inference achieves top-1 accuracy within 0.7% of the floating-point version.
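The post-training, per-tensor quantization setting the abstract contrasts with per-channel and training-aware approaches can be sketched as follows. This is a minimal illustration using NumPy of a standard asymmetric affine scheme (single scale and zero-point per tensor); the function names are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def quantize_per_tensor(x, num_bits=8):
    """Post-training affine (asymmetric) quantization of a whole tensor.

    One scale/zero-point pair covers the entire tensor ("per-tensor"),
    as opposed to the per-channel schemes the paper avoids.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    # Include 0.0 in the range so zero is exactly representable.
    x_min = min(float(x.min()), 0.0)
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map uint8 codes back to approximate float values."""
    return (q.astype(np.float32) - zero_point) * scale

# Example: a weight tensor with a wide dynamic range, as can occur
# in depth-wise separable convolution layers.
w = np.random.randn(32, 3, 3).astype(np.float32)
q, s, z = quantize_per_tensor(w)
w_hat = dequantize(q, s, z)
err = float(np.abs(w - w_hat).max())  # bounded by ~one quantization step
```

Because a single scale must cover the whole tensor, channels with small dynamic range get coarse resolution when another channel has large outliers, which is one reason depth-wise layers lose accuracy under naive per-tensor quantization.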

Authors (4)
  1. Thu Dinh (6 papers)
  2. Andrey Melnikov (12 papers)
  3. Vasilios Daskalopoulos (1 paper)
  4. Sek Chai (11 papers)
Citations (4)
