MKQ-BERT: Quantized BERT with 4-bits Weights and Activations (2203.13483v1)

Published 25 Mar 2022 in cs.LG

Abstract: Recently, pre-trained Transformer-based language models such as BERT have shown great superiority over traditional methods in many NLP tasks. However, the computational cost of deploying these models is prohibitive on resource-restricted devices. One way to alleviate this overhead is to quantize the original model into a lower-bit representation, and previous work has shown that both the weights and activations of BERT can be quantized down to 8 bits without degrading its performance. In this work, we propose MKQ-BERT, which further improves the compression level by using 4 bits for quantization. In MKQ-BERT, we propose a novel way of computing the gradient of the quantization scale, combined with an advanced distillation strategy. On the one hand, we show that MKQ-BERT outperforms existing BERT quantization methods, achieving higher accuracy at the same compression level. On the other hand, ours is the first work to successfully deploy a 4-bit BERT and achieve an end-to-end inference speedup. Our results suggest that we can achieve a 5.3x reduction in bits without degrading model accuracy, and that one int4 layer is 15x faster than a float32 layer in a Transformer-based model.
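
The abstract does not spell out the paper's rule for the quantization-scale gradient. As a rough, hedged illustration of what 4-bit fake quantization with a learnable scale looks like in general (an LSQ-style straight-through sketch, not MKQ-BERT's specific formulation; the function name and initialization below are assumptions), a minimal PyTorch example might be:

```python
import torch


def fake_quant_int4(x: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Symmetric 4-bit fake quantization with a learnable scale.

    NOTE: illustrative sketch only. MKQ-BERT defines its own gradient rule
    for `scale`, which is not reproduced here; this version simply keeps
    `scale` in the autograd graph and uses a straight-through estimator
    for the non-differentiable rounding step.
    """
    qmin, qmax = -8, 7                       # signed int4 range
    v = torch.clamp(x / scale, qmin, qmax)   # scale and clip to the int4 range
    q = v + (torch.round(v) - v).detach()    # round, with straight-through gradient
    return q * scale                         # dequantize back to float


# Usage sketch: quantization-aware training of a weight tensor.
w = torch.randn(768, 768, requires_grad=True)
s = (w.detach().abs().max() / 7).clone().requires_grad_()  # crude scale init
loss = fake_quant_int4(w, s).pow(2).mean()
loss.backward()  # gradients flow to both w and s
```

In an actual deployment the fake-quantized layers would be replaced by true int4 kernels at inference time, which is where the reported 15x per-layer speedup over float32 comes from.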

Authors (5)
  1. Hanlin Tang (34 papers)
  2. Xipeng Zhang (4 papers)
  3. Kai Liu (391 papers)
  4. Jianchen Zhu (14 papers)
  5. Zhanhui Kang (45 papers)
Citations (12)