BiSup: Bidirectional Quantization Error Suppression for Large Language Models (2405.15346v1)

Published 24 May 2024 in cs.CL, cs.AI, and cs.LG

Abstract: As the size and context length of LLMs grow, weight-activation quantization has emerged as a crucial technique for efficient deployment of LLMs. Compared to weight-only quantization, weight-activation quantization presents greater challenges due to the presence of outliers in activations. Existing methods have made significant progress by exploring mixed-precision quantization and outlier suppression. However, these methods primarily focus on optimizing the result of a single matrix multiplication, neglecting the bidirectional propagation of quantization errors in LLMs. Specifically, errors accumulate vertically within the same token through layers, and diffuse horizontally across different tokens due to the self-attention mechanism. To address this issue, we introduce BiSup, a Bidirectional quantization error Suppression method. By constructing appropriate optimizable parameter spaces, BiSup utilizes a small amount of data for quantization-aware parameter-efficient fine-tuning to suppress the vertical accumulation of errors. In addition, BiSup employs a prompt mixed-precision quantization strategy, which preserves high precision for the key-value cache of system prompts, to mitigate the horizontal diffusion of errors. Extensive experiments on the Llama and Qwen families demonstrate that BiSup improves performance over two state-of-the-art methods (the average WikiText2 perplexity decreases from 13.26 to 9.41 for Atom and from 14.33 to 7.85 for QuaRot under the W3A3-g128 configuration), further facilitating practical applications of low-bit weight-activation quantization.
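The W3A3-g128 setting named in the abstract (3-bit weights quantized per group of 128, 3-bit activations quantized per token) and the notion of vertical error accumulation can be made concrete with a small sketch. The snippet below is an illustrative assumption, not the authors' implementation: all function and variable names are hypothetical. It fake-quantizes weights per group and activations per token, then shows how the relative error of a toy stack of linear layers grows as each quantized output feeds the next quantized input.

```python
# Minimal sketch (hypothetical, not BiSup's code) of the W3A3-g128 configuration:
# 3-bit per-group weight quantization (group size 128) plus 3-bit per-token
# activation quantization, applied layer by layer to show error accumulation.
import torch

def quantize_weight_per_group(w: torch.Tensor, bits: int = 3, group_size: int = 128):
    """Symmetric per-group fake quantization along the input dimension of a weight matrix."""
    out_features, in_features = w.shape
    assert in_features % group_size == 0
    qmax = 2 ** (bits - 1) - 1                                    # 3 for 3-bit symmetric
    groups = w.reshape(out_features, in_features // group_size, group_size)
    scale = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(groups / scale), -qmax - 1, qmax)
    return (q * scale).reshape(out_features, in_features)          # dequantized weights

def quantize_activation_per_token(x: torch.Tensor, bits: int = 3):
    """Symmetric per-token fake quantization; activation outliers make the scale coarse."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / qmax
    return torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale

# Vertical error accumulation: each layer's quantized output feeds the next
# layer's quantized input, so the relative error tends to grow with depth.
torch.manual_seed(0)
x_fp = x_q = torch.randn(4, 1024)                                  # 4 tokens, hidden size 1024
for layer in range(8):
    w = torch.randn(1024, 1024) / 32
    x_fp = torch.relu(x_fp @ w.T)
    x_q = torch.relu(quantize_activation_per_token(x_q) @ quantize_weight_per_group(w).T)
    rel_err = (x_q - x_fp).norm() / x_fp.norm()
    print(f"layer {layer}: relative error {rel_err:.3f}")
```

The complementary horizontal path described in the abstract arises because attention mixes information across tokens, so errors in one token's key-value entries contaminate other tokens' outputs; BiSup's prompt mixed-precision strategy targets this by keeping the key-value cache of the system prompt in high precision while the rest is quantized.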

References (38)
  1. QUIK: Towards end-to-end 4-bit inference on generative large language models. arXiv preprint arXiv:2310.09259.
  2. QuaRot: Outlier-free 4-bit inference in rotated LLMs. arXiv preprint arXiv:2404.00456.
  3. Qwen technical report. arXiv preprint arXiv:2309.16609.
  4. PIQA: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7432–7439.
  5. BoolQ: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044.
  6. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. arXiv preprint arXiv:1803.05457.
  7. LLM.int8(): 8-bit matrix multiplication for transformers at scale. Advances in Neural Information Processing Systems, 35:30318–30332.
  8. QLoRA: Efficient finetuning of quantized LLMs. Advances in Neural Information Processing Systems, 36.
  9. CBQ: Cross-block quantization for large language models. arXiv preprint arXiv:2312.07950.
  10. OPTQ: Accurate quantization for generative pre-trained transformers. In The Eleventh International Conference on Learning Representations.
  11. OliVe: Accelerating large language models via hardware-friendly outlier-victim pair quantization. In Proceedings of the 50th Annual International Symposium on Computer Architecture, pages 1–15.
  12. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
  13. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2704–2713.
  14. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611–626.
  15. Norm tweaking: High-performance low-bit quantization of large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 18536–18544.
  16. LoftQ: LoRA-fine-tuning-aware quantization for large language models. arXiv preprint arXiv:2310.08659.
  17. Baohao Liao and Christof Monz. 2024. ApiQ: Finetuning of 2-bit quantized large language model. arXiv preprint arXiv:2402.05147.
  18. AWQ: Activation-aware weight quantization for LLM compression and acceleration. arXiv preprint arXiv:2306.00978.
  19. LLM-QAT: Data-free quantization aware training for large language models. arXiv preprint arXiv:2305.17888.
  20. AffineQuant: Affine transformation quantization for large language models. arXiv preprint arXiv:2403.12544.
  21. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843.
  22. A white paper on neural network quantization. arXiv preprint arXiv:2106.08295.
  23. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32.
  24. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1–67.
  25. WinoGrande: An adversarial Winograd schema challenge at scale. Communications of the ACM, 64(9):99–106.
  26. OmniQuant: Omnidirectionally calibrated quantization for large language models. arXiv preprint arXiv:2308.13137.
  27. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
  28. Model compression and efficient inference for large language models: A survey. arXiv preprint arXiv:2402.09748.
  29. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45.
  30. SmoothQuant: Accurate and efficient post-training quantization for large language models. In International Conference on Machine Learning, pages 38087–38099. PMLR.
  31. No token left behind: Reliable KV cache compression via importance-aware mixed precision quantization. arXiv preprint arXiv:2402.18096.
  32. Exploring post-training quantization in LLMs from comprehensive study to low rank compensation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 19377–19385.
  33. HellaSwag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830.
  34. Biao Zhang and Rico Sennrich. 2019. Root mean square layer normalization. Advances in Neural Information Processing Systems, 32.
  35. Instruction tuning for large language models: A survey. arXiv preprint arXiv:2308.10792.
  36. A survey of large language models. arXiv preprint arXiv:2303.18223.
  37. Atom: Low-bit quantization for efficient and accurate LLM serving. arXiv preprint arXiv:2310.19102.
  38. A survey on model compression for large language models. arXiv preprint arXiv:2308.07633.
