Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge (2312.05693v1)

Published 9 Dec 2023 in cs.LG, cs.AI, and cs.CL

Abstract: Large Language Models (LLMs) stand out for their impressive performance in intricate language modeling tasks. However, their demanding computational and memory needs pose obstacles for broad use on edge devices. Quantization is then introduced to boost LLMs' on-device efficiency. Recent works show that 8-bit or lower weight quantization is feasible with minimal impact on end-to-end task performance, while the activation is still not quantized. On the other hand, mainstream commodity edge devices still struggle to execute these sub-8-bit quantized networks effectively. In this paper, we propose Agile-Quant, an activation-guided quantization framework for popular LLMs, and implement an end-to-end accelerator on multiple edge devices for faster inference. Considering the hardware profiling and activation analysis, we first introduce a basic activation quantization strategy to balance the trade-off of task performance and real inference speed. Then we leverage the activation-aware token pruning technique to reduce the outliers and the adverse impact on attentivity. Ultimately, we utilize the SIMD-based 4-bit multiplier and our efficient TRIP matrix multiplication to implement the accelerator for LLMs on the edge. We apply our framework on different scales of LLMs including LLaMA, OPT, and BLOOM with 4-bit or 8-bit for the activation and 4-bit for the weight quantization. Experiments show that Agile-Quant achieves simultaneous quantization of model weights and activations while maintaining task performance comparable to existing weight-only quantization methods. Moreover, in the 8- and 4-bit scenario, Agile-Quant achieves an on-device speedup of up to 2.55x compared to its FP16 counterparts across multiple edge devices, marking a pioneering advancement in this domain.
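To make the abstract's two key ingredients concrete, the sketch below illustrates symmetric per-token activation quantization (to 4 or 8 bits) combined with a toy outlier-motivated token-pruning rule. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the keep_ratio parameter, and the peak-magnitude scoring rule are assumptions for illustration, and the paper's hardware-aware strategy, SIMD 4-bit multiplier, and TRIP matrix multiplication are not reproduced here.

```python
import numpy as np

def quantize_per_token(x, n_bits=8):
    """Symmetric per-token quantization of an activation matrix.

    x: (num_tokens, hidden_dim) float activations.
    Returns integer codes and per-token scales so that x ~= codes * scales.
    """
    qmax = 2 ** (n_bits - 1) - 1                      # 127 for 8-bit, 7 for 4-bit
    scales = np.abs(x).max(axis=1, keepdims=True) / qmax
    scales = np.maximum(scales, 1e-8)                 # avoid division by zero
    codes = np.clip(np.round(x / scales), -qmax - 1, qmax).astype(np.int8)
    return codes, scales

def prune_outlier_tokens(x, keep_ratio=0.9):
    """Illustrative activation-aware token pruning: drop the tokens whose
    activations contain the largest outliers, since these are the hardest to
    quantize at low bit-width (this scoring rule is an assumption, not the
    paper's exact criterion)."""
    outlier_score = np.abs(x).max(axis=1)             # per-token peak magnitude
    num_keep = max(1, int(keep_ratio * x.shape[0]))
    keep_idx = np.argsort(outlier_score)[:num_keep]   # keep the least-outlying tokens
    return np.sort(keep_idx)

# Toy usage: prune outlier-heavy tokens, then quantize the survivors to 4 bits.
acts = np.random.randn(16, 64).astype(np.float32)
kept = prune_outlier_tokens(acts, keep_ratio=0.75)
codes, scales = quantize_per_token(acts[kept], n_bits=4)
dequant = codes.astype(np.float32) * scales           # reconstruction for a quick error check
print("mean abs error:", np.abs(dequant - acts[kept]).mean())
```

Per-token scales keep each token's quantization error independent of its neighbors, which is why removing the few tokens with extreme activation outliers can noticeably improve low-bit accuracy in this kind of scheme.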

References (27)
  1. ARM. 2023. A collection of low-level machine learning functions optimized with SIMD technologies. https://arm-software.github.io/ComputeLibrary/v22.05/.
  2. Language models are few-shot learners. NeurIPS, 33: 1877–1901.
  3. Language Models are Few-Shot Learners.
  4. A Deep Look into Logarithmic Quantization of Model Parameters in Neural Networks. In IAIT.
  5. LLM.int8(): 8-bit matrix multiplication for transformers at scale. arXiv preprint arXiv:2208.07339.
  6. SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression. arXiv.
  7. HeatViT: Hardware-efficient adaptive token pruning for vision transformers. In HPCA, 442–455. IEEE.
  8. QNNPACK: Open source library for optimized mobile deep learning.
  9. GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers. arXiv.
  10. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In CVPR, 2704–2713.
  11. gemmlowp: A small self-contained low-precision GEMM library. Retrieved June 14, 2018.
  12. Learned token pruning for transformers. In KDD, 784–794.
  13. Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training. arXiv.
  14. Not all patches are what you need: Expediting vision transformers via token reorganizations. arXiv.
  15. AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. arXiv.
  16. FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer. In IJCAI, 1173–1179.
  17. Pointer sentinel mixture models. arXiv.
  18. Language models are unsupervised multitask learners. OpenAI blog, 1(8): 9.
  19. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research.
  20. BLOOM: A 176B-parameter open-access multilingual language model. arXiv.
  21. Data Level Lottery Ticket Hypothesis for Vision Transformers. In IJCAI.
  22. LLaMA: Open and Efficient Foundation Language Models. arXiv.
  23. Attention is all you need. NeurIPS, 30.
  24. ZeroQuant-FP: A Leap Forward in LLMs Post-Training W4A8 Quantization Using Floating-Point Formats. arXiv preprint arXiv:2307.09782.
  25. SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models. arXiv.
  26. OPT: Open pre-trained transformer language models. arXiv.
  27. Integer or Floating Point? New Outlooks for Low-Bit Quantization on Large Language Models. arXiv.
Authors (8)
  1. Xuan Shen (29 papers)
  2. Peiyan Dong (18 papers)
  3. Lei Lu (55 papers)
  4. Zhenglun Kong (33 papers)
  5. Zhengang Li (31 papers)
  6. Ming Lin (65 papers)
  7. Chao Wu (137 papers)
  8. Yanzhi Wang (197 papers)
Citations (12)