FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation (2102.07444v2)

Published 15 Feb 2021 in cs.CV

Abstract: Learning convolutional neural networks (CNNs) with low bitwidth is challenging because performance may drop significantly after quantization. Prior art often discretizes the network weights by carefully tuning hyper-parameters of quantization (e.g. non-uniform stepsize and layer-wise bitwidths), which is complicated and sub-optimal because the full-precision and low-precision models have a large discrepancy. This work presents a novel quantization pipeline, Frequency-Aware Transformation (FAT), which has several appealing benefits. (1) Rather than designing complicated quantizers as in existing works, FAT learns to transform network weights in the frequency domain before quantization, making them more amenable to training in low bitwidth. (2) With FAT, CNNs can be easily trained in low precision using simple standard quantizers without tedious hyper-parameter tuning. Theoretical analysis shows that FAT improves both uniform and non-uniform quantizers. (3) FAT can be easily plugged into many CNN architectures. When training ResNet-18 and MobileNet-V2 in 4 bits, FAT plus a simple rounding operation already achieves 70.5% and 69.2% top-1 accuracy on ImageNet without bells and whistles, outperforming the recent state of the art while reducing computation by 54.9x and 45.7x relative to the full-precision models. We hope FAT provides a novel perspective for model quantization. Code is available at https://github.com/ChaofanTao/FAT_Quantization.
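The pipeline the abstract describes can be pictured as: map the weights into the frequency domain, apply a learnable transform to the spectrum, map back, and then quantize with a plain rounding quantizer. The sketch below is a hypothetical PyTorch illustration of that idea, not the authors' implementation: the FFT-based transform, the sigmoid-gated mask, the 4-bit symmetric quantizer, and the straight-through estimator are all assumptions standing in for the components the paper describes (see the linked repository for the actual code).

```python
# Hypothetical sketch of a frequency-aware weight transform followed by a
# simple rounding quantizer. Component choices (FFT, sigmoid mask, STE) are
# assumptions for illustration, not the authors' exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F


def uniform_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Symmetric uniform quantizer with straight-through rounding."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    w_q = torch.round(w / scale).clamp(-qmax - 1, qmax) * scale
    # Straight-through estimator: forward uses w_q, gradients flow to w.
    return w + (w_q - w).detach()


class FrequencyAwareTransform(nn.Module):
    """Learnable soft mask on the weight spectrum, applied before quantization."""

    def __init__(self, num_weights: int, bits: int = 4):
        super().__init__()
        self.bits = bits
        # One learnable gate per frequency component of the flattened weight.
        self.mask_logits = nn.Parameter(torch.zeros(num_weights))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        spectrum = torch.fft.fft(w.flatten())                  # to frequency domain
        gated = spectrum * torch.sigmoid(self.mask_logits)     # learnable soft mask
        w_t = torch.fft.ifft(gated).real.view_as(w)            # back to weight space
        return uniform_quantize(w_t, self.bits)                # plain rounding quantizer


# Usage: quantize a conv layer's weight through the transform, then convolve.
conv = nn.Conv2d(16, 32, 3)
fat = FrequencyAwareTransform(conv.weight.numel(), bits=4)
x = torch.randn(1, 16, 8, 8)
w_q = fat(conv.weight)
y = F.conv2d(x, w_q, conv.bias, padding=1)
```

Because the quantizer itself is just rounding, all of the adaptation happens in the learnable transform, which matches the abstract's claim that FAT avoids tuning quantizer hyper-parameters.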

Authors (6)
  1. Chaofan Tao (27 papers)
  2. Rui Lin (36 papers)
  3. Quan Chen (91 papers)
  4. Zhaoyang Zhang (273 papers)
  5. Ping Luo (340 papers)
  6. Ngai Wong (82 papers)
Citations (7)