An Empirical Study of LLaMA3 Quantization: From LLMs to MLLMs (2404.14047v2)

Published 22 Apr 2024 in cs.LG

Abstract: The LLaMA family has become one of the most powerful open-source LLMs and a popular LLM backbone for Multimodal LLMs (MLLMs), widely applied in Computer Vision (CV) and Natural Language Understanding (NLU) tasks. Notably, the recently released LLaMA3 models achieve impressive performance across various benchmarks thanks to super-large-scale pre-training on over 15T tokens of data. Given the wide application of low-bit quantization for LLMs in resource-limited scenarios, we explore LLaMA3's capabilities when quantized to low bit-widths. This exploration can potentially unveil new insights and challenges for low-bit quantization of LLaMA3 and other forthcoming LLMs, especially in addressing the performance degradation that arises in LLM compression. Specifically, we comprehensively evaluate 10 existing post-training quantization and LoRA-finetuning methods on LLaMA3 at 1-8 bits and across diverse datasets to reveal its low-bit quantization performance. To uncover the capabilities of low-bit quantized MLLMs, we also assess the LLaMA3-based LLaVA-Next-8B model at ultra-low bit-widths of 2-4 bits with post-training quantization methods. Our experimental results indicate that LLaMA3 still suffers non-negligible degradation in both linguistic and visual contexts, particularly at ultra-low bit-widths. This highlights a significant performance gap at low bit-widths that needs to be bridged in future developments. We expect this empirical study to prove valuable in advancing future models, driving LLMs and MLLMs toward higher accuracy at lower bit-widths to enhance practicality.

Performance Evaluation of Low-Bit Quantization on Meta's LLaMA3 Models

Introduction

Meta's LLaMA3 models, introduced in April 2024, represent a significant advancement in the field of LLMs, boasting configurations of up to 70 billion parameters and extensive pre-training on over 15 trillion tokens. Despite their superior performance across various benchmarks, the real-world application of LLaMA3 models is often restricted by resource limitations, prompting a closer examination of low-bit quantization methods as a viable solution for compression. This paper evaluates the effectiveness of different low-bit quantization techniques, both post-training and during fine-tuning, to maintain the operational integrity of LLaMA3 models under resource constraints.

Quantization Techniques Evaluated

The paper categorizes the quantization methods into two main tracks:

  1. Post-Training Quantization (PTQ)
    • Techniques such as RTN, GPTQ, AWQ, and SmoothQuant were tested across a bit-width spectrum from 1 to 8 bits (a minimal RTN sketch follows this list).
    • Binarization-oriented methods such as PB-LLM and DB-LLM use strategies tailored to ultra-low bit-widths and retain accuracy comparatively well in that regime.
  2. LoRA-Finetuning (LoRA-FT) Quantization
    • The paper focuses on newer methods such as QLoRA and IR-QLoRA, which adapt model parameters during fine-tuning to achieve better quantization outcomes.
    • These methods were evaluated primarily on the MMLU benchmark and additional CommonSenseQA tasks to assess how well they preserve accuracy at low bit-widths.
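To make the PTQ track concrete, the sketch below illustrates round-to-nearest (RTN), the simplest baseline in this comparison: each weight is mapped onto a uniform integer grid and back. This is a minimal, assumed per-channel asymmetric implementation in PyTorch for accuracy simulation ("fake quantization"); the function name and the per-channel choice are illustrative rather than the authors' code, and methods like GPTQ or AWQ add error-compensating updates or activation-aware scaling on top of this basic idea.

```python
# Minimal round-to-nearest (RTN) weight quantization sketch (assumed
# per-output-channel, asymmetric). Returns a dequantized copy of the weight
# so that accuracy can be simulated without custom low-bit kernels.
import torch

def rtn_quantize_weight(w: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    qmax = 2 ** n_bits - 1
    # Per-output-channel range (rows of w are output channels).
    w_min = w.min(dim=1, keepdim=True).values
    w_max = w.max(dim=1, keepdim=True).values
    scale = (w_max - w_min).clamp(min=1e-8) / qmax
    zero_point = torch.round(-w_min / scale)
    # Round each weight to the nearest grid point, then map back to float.
    q = torch.clamp(torch.round(w / scale) + zero_point, 0, qmax)
    return (q - zero_point) * scale

# Example: 4-bit fake quantization of a random 4096x4096 weight matrix.
w = torch.randn(4096, 4096)
w_q = rtn_quantize_weight(w, n_bits=4)
print("mean abs error:", (w - w_q).abs().mean().item())
```

The reconstruction error of this rounding step is exactly what the more elaborate PTQ methods in the paper try to reduce at a fixed bit budget.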

Experimental Results

  • PTQ Evaluation: A comprehensive assessment across benchmarks showed that while methods such as GPTQ and AWQ maintained reasonable performance down to 3-bit quantization, nearly all techniques suffered severe degradation at ultra-low bit-widths (1-2 bits). Specialized methods such as PB-LLM, which introduce mixed-precision strategies, only partially mitigated the drop.
  • LoRA-FT Evaluation: LoRA-FT methods did not substantially improve outcomes for LLaMA3, and in some cases performed worse than their non-fine-tuned counterparts, underscoring the difficulty of applying low-rank adjustments to an already highly optimized model (a simplified LoRA sketch follows this list).
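To make the LoRA-FT track concrete as well, the sketch below shows the mechanism that QLoRA-style methods share: the (quantized) base weight is frozen and only a low-rank update is trained. This is a hypothetical, simplified PyTorch illustration (the LoRALinear, lora_A, and lora_B names are ours); real QLoRA and IR-QLoRA additionally rely on 4-bit storage formats such as NF4 and dedicated library kernels that are not reproduced here.

```python
# Simplified LoRA-on-a-frozen-base sketch (assumed illustration, not the
# paper's code): only the low-rank factors A and B receive gradients.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base_weight: torch.Tensor, rank: int = 16, alpha: float = 32.0):
        super().__init__()
        # Frozen base weight, e.g. a fake-quantized copy of the original layer.
        self.weight = nn.Parameter(base_weight, requires_grad=False)
        out_features, in_features = base_weight.shape
        # Trainable low-rank factors; B starts at zero so training begins
        # from the unmodified quantized model.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.t()
        update = (x @ self.lora_A.t()) @ self.lora_B.t()
        return base + self.scaling * update

# Example: only the LoRA factors show up as trainable parameters.
layer = LoRALinear(torch.randn(1024, 1024))
print([n for n, p in layer.named_parameters() if p.requires_grad])  # ['lora_A', 'lora_B']
```

The paper's observation is that this extra trainable capacity did not recover the accuracy lost to quantization for LLaMA3 as effectively as it did for earlier LLaMA generations.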

Implications and Future Directions

The observed performance degradation in low-bit scenarios highlights a critical challenge for deploying LLaMA3 in resource-limited environments. This issue prompts further research into developing more robust quantization techniques that can effectively bridge the performance gap identified in this paper. Future advancements might focus on:

  • Enhancing PTQ methods to support lower-bit operations without a substantial loss in accuracy.
  • Innovating LoRA-FT approaches that can leverage the intrinsic capacity of LLaMA3 models more effectively, perhaps through more sophisticated parameter optimization or adjustment techniques.

Ultimately, by improving the efficacy of these quantization methods, LLMs like LLaMA3 could be deployed more widely, extending their utility to a variety of applications where computational resources are a limiting factor.

References (22)
  1. PIQA: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7432–7439, 2020.
  2. QuIP: 2-bit quantization of large language models with guarantees. Advances in Neural Information Processing Systems, 36, 2024.
  3. DB-LLM: Accurate dual-binarization for efficient LLMs. arXiv preprint arXiv:2402.11960, 2024.
  4. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. arXiv preprint arXiv:1803.05457, 2018.
  5. QLoRA: Efficient finetuning of quantized LLMs. Advances in Neural Information Processing Systems, 36, 2024.
  6. GPTQ: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323, 2022.
  7. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
  8. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2021.
  9. BiLLM: Pushing the limit of post-training quantization for LLMs. arXiv preprint arXiv:2402.04291, 2024.
  10. AWQ: Activation-aware weight quantization for LLM compression and acceleration. arXiv preprint arXiv:2306.00978, 2023.
  11. The Penn Treebank: Annotating predicate argument structure. In Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994, 1994.
  12. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.
  13. Accurate LoRA-finetuning quantization of LLMs via information retention. arXiv preprint arXiv:2402.05445, 2024.
  14. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020.
  15. WinoGrande: An adversarial Winograd Schema Challenge at scale. Communications of the ACM, 64(9):99–106, 2021.
  16. PB-LLM: Partially binarized large language models. arXiv preprint arXiv:2310.00034, 2023.
  17. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
  18. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
  19. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
  20. SmoothQuant: Accurate and efficient post-training quantization for large language models. In International Conference on Machine Learning, pages 38087–38099. PMLR, 2023.
  21. QA-LoRA: Quantization-aware low-rank adaptation of large language models. arXiv preprint arXiv:2309.14717, 2023.
  22. HellaSwag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.
Authors (10)
  1. Wei Huang
  2. Xudong Ma
  3. Haotong Qin
  4. Xingyu Zheng
  5. Chengtao Lv
  6. Hong Chen
  7. Jie Luo
  8. Xiaojuan Qi
  9. Xianglong Liu
  10. Michele Magno