BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via Self-Distillation (2402.10631v1)

Published 16 Feb 2024 in cs.CL

Abstract: The upscaling of LLMs has yielded impressive advances in natural language processing, yet it also poses significant deployment challenges. Weight quantization has emerged as a widely embraced solution to reduce memory and computational demands. This paper introduces BitDistiller, a framework that synergizes Quantization-Aware Training (QAT) with Knowledge Distillation (KD) to boost the performance of LLMs at ultra-low precisions (sub-4-bit). Specifically, BitDistiller first incorporates a tailored asymmetric quantization and clipping technique to maximally preserve the fidelity of quantized weights, and then proposes a novel Confidence-Aware Kullback-Leibler Divergence (CAKLD) objective, which is employed in a self-distillation manner to enable faster convergence and superior model performance. Empirical evaluations demonstrate that BitDistiller significantly surpasses existing methods in both 3-bit and 2-bit configurations on general language understanding and complex reasoning benchmarks. Notably, BitDistiller is shown to be more cost-effective, demanding fewer data and training resources. The code is available at https://github.com/DD-DuDa/BitDistiller.
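
The abstract names two mechanisms without spelling out their form. For the first, a minimal sketch of asymmetric weight quantization with clipping is given below; the percentile-based clip range and the function name are illustrative assumptions, not the clipping search BitDistiller actually uses.

```python
# Hedged sketch: asymmetric affine quantization of a weight tensor with clipping.
# The clip threshold here is a fixed quantile, an assumption for illustration only.
import torch

def asymmetric_quantize(w: torch.Tensor, n_bits: int = 3, clip_q: float = 0.995):
    """Fake-quantize weights to n_bits with an asymmetric zero point (for QAT)."""
    # Clip extreme values so the low-bit grid covers the bulk of the weights.
    lo = torch.quantile(w, 1.0 - clip_q)
    hi = torch.quantile(w, clip_q)
    w_clipped = w.clamp(lo, hi)

    # Asymmetric affine mapping: scale and zero point derived from the clip range.
    qmin, qmax = 0, 2 ** n_bits - 1
    scale = (hi - lo) / (qmax - qmin)
    zero_point = torch.round(-lo / scale)

    q = torch.clamp(torch.round(w_clipped / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale  # dequantized weights used in the forward pass
```

For the second mechanism, one reading of the Confidence-Aware KL Divergence (CAKLD) objective is a blend of forward (mode-covering) and reverse (mode-seeking) KL divergence, weighted by a coefficient estimated from the teacher's average confidence on the training data. The sketch below assumes that formulation; the function names and the exact pairing of the coefficient with each KL term are assumptions to be checked against the released code.

```python
# Hedged sketch of a CAKLD-style distillation loss, under the assumptions above.
import torch
import torch.nn.functional as F

def cakld_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor, gamma: float):
    """Logits have shape (batch, seq, vocab); gamma in [0, 1] is teacher confidence."""
    log_p_t = F.log_softmax(teacher_logits, dim=-1)
    log_p_s = F.log_softmax(student_logits, dim=-1)
    p_t = log_p_t.exp()
    p_s = log_p_s.exp()

    forward_kl = (p_t * (log_p_t - log_p_s)).sum(-1).mean()  # D_KL(teacher || student)
    reverse_kl = (p_s * (log_p_s - log_p_t)).sum(-1).mean()  # D_KL(student || teacher)

    # Assumed blend: a more confident teacher weights the forward (mode-covering)
    # term more heavily; an uncertain teacher shifts weight to the reverse term.
    return gamma * forward_kl + (1.0 - gamma) * reverse_kl

@torch.no_grad()
def estimate_gamma(teacher_logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Average probability the teacher assigns to the ground-truth tokens."""
    probs = F.softmax(teacher_logits, dim=-1)
    token_probs = probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return token_probs.mean().item()
```

In the self-distillation setting described by the abstract, the teacher is the full-precision model and the student is its own sub-4-bit quantized counterpart, so the same weights (fake-quantized as in the first sketch) produce the student logits during QAT.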

Authors (7)
  1. Dayou Du (11 papers)
  2. Yijia Zhang (24 papers)
  3. Shijie Cao (20 papers)
  4. Jiaqi Guo (28 papers)
  5. Ting Cao (100 papers)
  6. Xiaowen Chu (108 papers)
  7. Ningyi Xu (16 papers)
Citations (13)