
A Quantization-Friendly Separable Convolution for MobileNets (1803.08607v3)

Published 22 Mar 2018 in cs.CV

Abstract: As deep learning (DL) is rapidly pushed to edge computing, researchers have invented various ways to make inference computation more efficient on mobile/IoT devices, such as network pruning and parameter compression. Quantization, as one of the key approaches, can effectively offload the GPU and makes it possible to deploy DL on a fixed-point pipeline. Unfortunately, not all existing network designs are friendly to quantization. For example, although the popular lightweight MobileNetV1 successfully reduces parameter size and computation latency with separable convolution, our experiments show that its quantized models have a large accuracy gap relative to their floating-point counterparts. To resolve this, we analyze the root cause of the quantization loss and propose a quantization-friendly separable convolution architecture. Evaluated on the ImageNet2012 image classification task, our modified MobileNetV1 model achieves a top-1 accuracy of 68.03% with 8-bit inference, almost closing the gap to the float pipeline.
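
For context, the "8-bit inference" the abstract refers to is typically uniform affine quantization: a float tensor is mapped to 8-bit integers through a scale and a zero point. The sketch below illustrates that general scheme only, not the authors' exact pipeline; the function names and the min/max calibration are illustrative assumptions.

```python
import numpy as np

def quantize_uint8(x: np.ndarray):
    """Uniform affine quantization of a float tensor to 8-bit.

    Maps [x.min(), x.max()] onto [0, 255] via a scale and zero point --
    the general scheme behind 8-bit fixed-point inference. Illustrative
    sketch; not the paper's specific quantizer.
    """
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)   # keep 0.0 exactly representable
    scale = (hi - lo) / 255.0 or 1.0      # guard against a constant tensor
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map 8-bit values back to floats for comparison against the original."""
    return (q.astype(np.float32) - zero_point) * scale

# A wide dynamic range in a layer's values inflates `scale` and hence the
# per-element rounding error -- the kind of quantization loss whose root
# cause the paper analyzes in MobileNetV1's separable convolutions.
w = np.random.randn(64).astype(np.float32)
q, s, z = quantize_uint8(w)
print("max abs error:", np.abs(w - dequantize(q, s, z)).max())
```

Because every value in a tensor shares one scale, a few outliers stretch the quantization grid and degrade the precision available to the rest; this is why some layer designs quantize far better than others.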

Authors (6)
  1. Tao Sheng (12 papers)
  2. Chen Feng (172 papers)
  3. Shaojie Zhuo (6 papers)
  4. Xiaopeng Zhang (100 papers)
  5. Liang Shen (26 papers)
  6. Mickey Aleksic (1 paper)
Citations (110)
