LLM-QAT: Data-Free Quantization Aware Training for Large Language Models (2305.17888v1)

Published 29 May 2023 in cs.CL

Abstract: Several post-training quantization methods have been applied to LLMs and have been shown to perform well down to 8 bits. We find that these methods break down at lower bit precision, and we investigate quantization-aware training for LLMs (LLM-QAT) to push quantization levels even further. We propose a data-free distillation method that leverages generations produced by the pre-trained model, which better preserves the original output distribution and allows quantizing any generative model independently of its training data, similar to post-training quantization methods. In addition to quantizing weights and activations, we also quantize the KV cache, which is critical for increasing throughput and supporting long-sequence dependencies at current model sizes. We experiment with LLaMA models of sizes 7B, 13B, and 30B, at quantization levels down to 4 bits. We observe large improvements over training-free methods, especially in the low-bit settings.
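
The abstract describes two ingredients: quantization-aware training applied to weights, activations, and the KV cache, and data-free distillation in which the full-precision pre-trained model generates its own training data. The sketch below illustrates the general idea only; it is not the authors' implementation. It assumes symmetric MinMax fake quantization with a straight-through estimator, HuggingFace-style `generate`/`.logits` interfaces on the teacher and student models, and a KL distillation loss as a stand-in for the paper's exact objective.

```python
# Illustrative sketch of LLM-QAT-style training (assumptions noted above).
import torch
import torch.nn.functional as F


def fake_quantize(x: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Symmetric MinMax fake quantization with a straight-through estimator."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = x.abs().amax().clamp(min=1e-8) / qmax
    x_q = (x / scale).round().clamp(-qmax - 1, qmax) * scale
    # Forward uses the quantized values; backward passes gradients straight to x.
    return x + (x_q - x).detach()


def distillation_step(teacher, student, prompt_ids, optimizer, max_new_tokens=128):
    """One data-free distillation step: the teacher generates a sequence,
    and the (fake-)quantized student is trained to match the teacher's logits."""
    teacher.eval()
    with torch.no_grad():
        # Data-free: training sequences are sampled from the teacher itself.
        gen_ids = teacher.generate(prompt_ids, do_sample=True,
                                   max_new_tokens=max_new_tokens)
        teacher_logits = teacher(gen_ids).logits

    # The student is assumed to apply fake_quantize to its weights,
    # activations, and cached keys/values internally.
    student_logits = student(gen_ids).logits
    loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, `fake_quantize` would be applied inside the student's linear layers (to weights and activations) and to the key/value tensors before they enter the cache, for example via patched attention modules; the details here are placeholders, not the paper's exact scheme.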

Authors (9)
  1. Zechun Liu (48 papers)
  2. Barlas Oguz (36 papers)
  3. Changsheng Zhao (17 papers)
  4. Ernie Chang (33 papers)
  5. Pierre Stock (19 papers)
  6. Yashar Mehdad (37 papers)
  7. Yangyang Shi (53 papers)
  8. Raghuraman Krishnamoorthi (29 papers)
  9. Vikas Chandra (74 papers)
Citations (144)