Toward Extremely Low Bit and Lossless Accuracy in DNNs with Progressive ADMM (1905.00789v1)

Published 2 May 2019 in cs.LG, cs.CV, and stat.ML

Abstract: Weight quantization is one of the most important techniques for Deep Neural Network (DNN) model compression. A recent work using a systematic framework of DNN weight quantization with the advanced optimization algorithm ADMM (Alternating Direction Method of Multipliers) achieves state-of-the-art results in weight quantization. In this work, we first extend this ADMM-based framework to guarantee solution feasibility, and we further develop a multi-step, progressive DNN weight quantization framework with dual benefits: (i) achieving further weight quantization thanks to the special property of ADMM regularization, and (ii) reducing the search space within each step. Extensive experimental results demonstrate superior performance compared with prior work. Some highlights: we derive the first lossless and fully binarized (all layers) LeNet-5 for MNIST, and the first fully binarized (all layers) VGG-16 for CIFAR-10 and ResNet for ImageNet with reasonable accuracy loss.
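The core mechanism the abstract describes, ADMM-based weight quantization with a binarization constraint, can be sketched as alternating between gradient steps on an augmented loss and a closed-form projection of the weights onto a binary set. The snippet below is a minimal illustrative sketch, not the authors' implementation: the function names, the simple scalar-scaled binarization `a * sign(w)`, and all hyperparameters are assumptions for demonstration.

```python
import numpy as np

def project_binary(w):
    # Euclidean projection of a weight vector onto {-a, +a}^n:
    # the optimal scaling factor is a = mean(|w|), signs are kept.
    a = np.mean(np.abs(w))
    return a * np.sign(w)

def admm_quantize(w, loss_grad, lr=0.01, rho=1e-3, steps=100, admm_every=10):
    # Minimize L(w) + (rho/2)||w - z + u||^2 by gradient descent on w,
    # periodically updating the auxiliary variable z (projection step)
    # and the scaled dual variable u (dual ascent step).
    z = project_binary(w)
    u = np.zeros_like(w)
    for t in range(steps):
        g = loss_grad(w) + rho * (w - z + u)
        w = w - lr * g
        if (t + 1) % admm_every == 0:
            z = project_binary(w + u)  # projection onto the binary set
            u = u + w - z              # dual update
    return project_binary(w)

# Toy usage with a quadratic loss ||w - target||^2 (gradient 2(w - target)).
rng = np.random.default_rng(0)
target = rng.normal(size=8)
w_q = admm_quantize(rng.normal(size=8), lambda w: 2.0 * (w - target))
```

In the paper's progressive framework, such an ADMM block would be applied over multiple steps (e.g., moving from higher bit-widths down toward binary), shrinking the search space at each stage; the sketch above shows only a single binarization stage.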

Authors (6)
  1. Sheng Lin (29 papers)
  2. Xiaolong Ma (57 papers)
  3. Shaokai Ye (20 papers)
  4. Geng Yuan (58 papers)
  5. Kaisheng Ma (46 papers)
  6. Yanzhi Wang (197 papers)
Citations (9)