Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling (2304.09145v3)

Published 18 Apr 2023 in cs.CL

Abstract: Post-training quantization (PTQ) of transformer LLMs faces significant challenges due to detrimental outliers in activations. We observe that these outliers are concentrated in specific channels and are asymmetric across channels. To address this issue, we propose the Outlier Suppression+ (OS+) framework, which combines channel-wise shifting to remove asymmetry with channel-wise scaling to reduce concentration. We first show that these operations can be seamlessly migrated into subsequent modules while maintaining equivalence. Second, we propose a fast and stable scheme to calculate effective shifting and scaling values: the channel-wise shifting aligns the center of each channel to remove outlier asymmetry, while the channel-wise scaling quantitatively evaluates the changes brought by migration and quantization to better balance the quantization burden. We validate OS+ under both standard and fine-grained quantization settings with models including BERT, OPT, BLOOM, BLOOMZ, and LLaMA. Comprehensive results across various tasks demonstrate the superiority of our approach. Notably, with standard quantization, OS+ achieves near-floating-point performance on both small models and LLMs at 8-bit and 6-bit. In addition, we establish a new state of the art for 4-bit BERT with a 15.5% improvement. Our code is available at https://github.com/ModelTC/Outlier_Suppression_Plus.
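Below is a minimal PyTorch sketch of the equivalence behind that migration, assuming the shifted and scaled activation feeds a torch.nn.Linear (in the paper, the transform is also absorbed into preceding LayerNorm parameters). The function name `migrate_shift_scale` and the simple midpoint/max-abs heuristics in the demo are illustrative stand-ins for the paper's optimal shifting and scaling scheme, not the authors' code:

```python
import torch

def migrate_shift_scale(x, linear, shift, scale):
    """Fold a channel-wise shift and scale of the activations into the
    following Linear layer so the network output is unchanged.

    x:      activations, shape (tokens, channels)
    linear: torch.nn.Linear that consumes x
    shift:  per-channel shift z, shape (channels,)
    scale:  per-channel scale s > 0, shape (channels,)
    Returns the transformed activations x_hat = (x - z) / s.
    """
    with torch.no_grad():
        # y = x W^T + b = (x_hat * s + z) W^T + b
        #   = x_hat (W * s)^T + (b + z W^T)
        linear.bias.add_(shift @ linear.weight.t())  # b' = b + z W^T (uses original W)
        linear.weight.mul_(scale)                    # W'[:, j] = s_j * W[:, j]
    return (x - shift) / scale

# Demo: the folded layer reproduces the original output exactly.
lin = torch.nn.Linear(8, 4)
x = torch.randn(16, 8)
y_ref = lin(x)
z = (x.max(0).values + x.min(0).values) / 2      # center each channel (removes asymmetry)
s = (x - z).abs().max(0).values.clamp(min=1e-5)  # flatten outlier channels
x_hat = migrate_shift_scale(x, lin, z, s)
assert torch.allclose(lin(x_hat), y_ref, atol=1e-5)
```

Because the transform is exactly invertible and folded into existing weights, quantizing the centered, outlier-suppressed x_hat instead of x adds no extra operations at inference time.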

Authors (7)
  1. Xiuying Wei (10 papers)
  2. Yunchen Zhang (14 papers)
  3. Yuhang Li (102 papers)
  4. Xiangguo Zhang (6 papers)
  5. Ruihao Gong (40 papers)
  6. Jinyang Guo (28 papers)
  7. Xianglong Liu (128 papers)
Citations (19)