MetaGrad: Adaptive Gradient Quantization with Hypernetworks (2303.02347v2)

Published 4 Mar 2023 in cs.CV

Abstract: A popular line of network compression approaches is Quantization-Aware Training (QAT), which accelerates the forward pass during neural network training and inference. However, little prior effort has been made to quantize and accelerate the backward pass during training, even though it accounts for roughly half of the training time. This can be partly attributed to the fact that errors introduced by low-precision gradients in the backward pass cannot be amortized by the training objective as they are in the QAT setting. In this work, we propose to solve this problem by incorporating the gradients into the computation graph of the next training iteration via a hypernetwork. Experiments on the CIFAR-10 dataset with different CNN architectures demonstrate that our hypernetwork-based approach can effectively reduce the negative effect of gradient quantization noise and successfully quantizes the gradients to INT4 with only a 0.64 accuracy drop for VGG-16 on CIFAR-10.
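The sketch below is a minimal, hypothetical illustration of the two ingredients the abstract describes: quantizing gradients to INT4 during the backward pass and using a small hypernetwork to correct the quantized gradient. The names (`quantize_int4`, `GradHyperNet`), the per-tensor statistics fed to the hypernetwork, and the multiplicative correction are all assumptions, not the paper's actual design; in the paper the hypernetwork is trained by feeding its output into the next iteration's computation graph, which this sketch does not reproduce.

```python
import torch
import torch.nn as nn

def quantize_int4(g, eps=1e-8):
    """Symmetric per-tensor INT4 quantization of a gradient tensor.

    Values are mapped to the integer range [-8, 7] and de-quantized back
    to float, so the returned tensor carries INT4-level quantization noise.
    (Illustrative scheme; the paper's exact quantizer may differ.)
    """
    scale = g.abs().max().clamp(min=eps) / 7.0
    q = torch.clamp(torch.round(g / scale), -8, 7)
    return q * scale

class GradHyperNet(nn.Module):
    """Tiny hypothetical hypernetwork mapping simple statistics of a
    quantized gradient to a scalar correction, illustrating the idea of
    letting a learned module compensate for gradient quantization noise."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, g_q):
        stats = torch.stack([g_q.mean(), g_q.std()]).unsqueeze(0)
        return self.net(stats).squeeze()  # scalar correction factor

# Usage sketch: quantize each gradient, apply a hypernetwork-predicted
# correction, then take the optimizer step with the corrected gradients.
model = nn.Linear(32, 10)
hyper = GradHyperNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

with torch.no_grad():
    for p in model.parameters():
        if p.grad is not None:
            g_q = quantize_int4(p.grad)
            # In the paper, this corrected gradient enters the next
            # iteration's computation graph so the hypernetwork itself
            # can be trained; here it is applied without that coupling.
            p.grad.copy_(g_q * (1.0 + hyper(g_q)))
opt.step()
opt.zero_grad()
```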

Authors (6)
  1. Kaixin Xu (15 papers)
  2. Alina Hui Xiu Lee (1 paper)
  3. Ziyuan Zhao (32 papers)
  4. Zhe Wang (574 papers)
  5. Min Wu (201 papers)
  6. Weisi Lin (118 papers)
Citations (1)
