Compression of Generative Pre-trained Language Models via Quantization (2203.10705v2)

Published 21 Mar 2022 in cs.CL and cs.CV

Abstract: The increasing size of generative Pre-trained Language Models (PLMs) has greatly increased the demand for model compression. Despite various methods to compress BERT or its variants, there are few attempts to compress generative PLMs, and the underlying difficulty remains unclear. In this paper, we compress generative PLMs by quantization. We find that previous quantization methods fail on generative tasks due to the homogeneous word embeddings caused by reduced capacity, and the varied distribution of weights. Correspondingly, we propose a token-level contrastive distillation to learn distinguishable word embeddings, and a module-wise dynamic scaling to make quantizers adaptive to different modules. Empirical results on various tasks show that our proposed method outperforms the state-of-the-art compression methods on generative PLMs by a clear margin. With performance comparable to the full-precision models, we achieve 14.4x and 13.4x compression rates on GPT-2 and BART, respectively.
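The abstract names two techniques: a token-level contrastive distillation that keeps the quantized model's word representations distinguishable, and a module-wise dynamic scaling that lets each module's quantizer adapt to its own weight range. The PyTorch sketch below is an illustrative reading of those two ideas, not the authors' implementation; the function and class names, tensor shapes, bit-width, and the learnable-scale straight-through quantizer are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F


def token_contrastive_distillation_loss(student_h, teacher_h, temperature=0.1):
    """Token-level contrastive distillation (illustrative sketch).

    For each token position, the teacher's hidden state at the same position
    is treated as the positive, and the teacher's states at other positions
    act as negatives, pushing the quantized student toward distinguishable
    token representations. Shapes are assumed to be [seq_len, hidden].
    """
    s = F.normalize(student_h, dim=-1)
    t = F.normalize(teacher_h, dim=-1)
    logits = s @ t.T / temperature                       # [seq_len, seq_len]
    labels = torch.arange(s.size(0), device=s.device)    # positive = same index
    return F.cross_entropy(logits, labels)


class ModuleScaledQuantizer(torch.nn.Module):
    """Weight quantizer with a learnable, per-module scale (assumed design).

    A rough stand-in for module-wise dynamic scaling: each module owns its own
    scale parameter so the quantizer can track that module's weight statistics.
    """

    def __init__(self, bits=2):
        super().__init__()
        self.bits = bits
        self.scale = torch.nn.Parameter(torch.tensor(1.0))  # one scale per module

    def forward(self, w):
        qmax = 2 ** (self.bits - 1) - 1
        w_scaled = torch.clamp(w / self.scale, -1.0, 1.0) * qmax
        # Round with a straight-through estimator so gradients flow to both
        # the weights and the learnable scale.
        w_rounded = w_scaled + (torch.round(w_scaled) - w_scaled).detach()
        return w_rounded / qmax * self.scale
```

As a usage sketch, each linear layer of the student would hold its own ModuleScaledQuantizer, and the contrastive loss would be added to the usual task and distillation losses during quantization-aware training.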

Authors (8)
  1. Chaofan Tao (27 papers)
  2. Lu Hou (50 papers)
  3. Wei Zhang (1489 papers)
  4. Lifeng Shang (90 papers)
  5. Xin Jiang (242 papers)
  6. Qun Liu (230 papers)
  7. Ping Luo (340 papers)
  8. Ngai Wong (82 papers)
Citations (91)