Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs (2405.03103v2)
Abstract: The increasing size of LLMs has traditionally required low-precision integer formats to meet strict latency and power demands. Yet recently, alternative formats such as Normal Float (NF4) have improved model accuracy at the cost of additional chip area. In this work, we first conduct a large-scale analysis of LLM weights and activations across 30 networks and conclude that most distributions follow a Student's t-distribution. We then derive a new theoretically optimal format, Student Float (SF4), which improves over NF4 on modern LLMs, for example raising the average accuracy of LLaMA2-7B by 0.76% across tasks. Using this format as a high-accuracy reference, we then propose augmenting E2M1 with two variants of supernormal support for higher model accuracy. Finally, we explore the accuracy-efficiency frontier across 11 datatypes by evaluating their model accuracy and hardware complexity. We discover a Pareto curve composed of INT4, E2M1, and E2M1 with supernormal support, which offers a continuous tradeoff between model accuracy and chip area. For example, E2M1 with supernormal support increases the accuracy of Phi-2 by up to 2.19% with 1.22% area overhead, enabling more LLM-based applications to run at four bits. The supporting code is hosted at https://github.com/cornell-zhang/LLM-datatypes.
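To make the quantile-based idea behind NF4 and SF4 concrete, below is a minimal sketch (not the paper's exact SF4 construction, which is defined in the paper and its repository): it places 4-bit codebook levels at evenly spaced quantiles of either a Gaussian or a heavier-tailed Student's t-distribution, then applies simple absmax block-wise round-to-nearest quantization. The degrees of freedom (`df=5`), the quantile grid and its clipping `eps`, the block size, and all function names are illustrative assumptions.

```python
# Sketch: build 4-bit quantile codebooks from a Gaussian (NF4-style) and a
# Student's t-distribution (SF4-style), then measure reconstruction error
# under absmax block-wise quantization. Parameters are illustrative only.
import numpy as np
from scipy.stats import norm, t as student_t

def quantile_codebook(dist, num_levels=16, eps=1e-3):
    """Place levels at evenly spaced quantiles of `dist`, scaled into [-1, 1]."""
    probs = np.linspace(eps, 1.0 - eps, num_levels)
    levels = dist.ppf(probs)
    return levels / np.abs(levels).max()  # extreme levels map to +/-1

def quantize_blockwise(weights, codebook, block_size=64):
    """Absmax-scale each block into [-1, 1], snap to the nearest level, rescale."""
    w = weights.reshape(-1, block_size)
    scales = np.abs(w).max(axis=1, keepdims=True)
    normed = w / scales
    idx = np.abs(normed[..., None] - codebook).argmin(axis=-1)  # nearest level
    return (codebook[idx] * scales).ravel()  # dequantized reconstruction

nf4_like = quantile_codebook(norm())            # Gaussian quantiles (NF4-style)
sf4_like = quantile_codebook(student_t(df=5))   # heavier-tailed quantiles (SF4-style)

# Synthetic heavy-tailed "weights" stand in for the LLM tensors analyzed in the paper.
rng = np.random.default_rng(0)
w = rng.standard_t(df=5, size=4096).astype(np.float32)
err_nf = np.mean((w - quantize_blockwise(w, nf4_like)) ** 2)
err_sf = np.mean((w - quantize_blockwise(w, sf4_like)) ** 2)
print(f"NF4-like MSE: {err_nf:.5f}  SF4-like MSE: {err_sf:.5f}")
```

On heavy-tailed data such as the synthetic tensor above, the t-derived codebook typically reconstructs the tails with lower error, which mirrors the paper's motivation for SF4; the published format additionally fixes details such as the exact quantile placement that this sketch does not reproduce.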
Authors: Jordan Dotzel, Yuzong Chen, Bahaa Kotb, Sushma Prasad, Gang Wu, Sheng Li, Mohamed S. Abdelfattah, Zhiru Zhang