
LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models (2206.09557v4)

Published 20 Jun 2022 in cs.DC and cs.CL

Abstract: Recent advances in self-supervised learning and the Transformer architecture have significantly improved NLP, achieving remarkably low perplexity. However, the growing size of NLP models introduces a memory wall problem during the generation phase. To mitigate this issue, recent efforts have focused on quantizing model weights to sub-4-bit precision while preserving full precision for activations, resulting in practical speed-ups during inference on a single GPU. However, these improvements primarily stem from reduced memory movement, which requires a resource-intensive dequantization process rather than an actual reduction in computation. In this paper, we introduce LUT-GEMM, an efficient kernel for quantized matrix multiplication that not only eliminates the resource-intensive dequantization process but also reduces computational costs compared to previous kernels for weight-only quantization. Furthermore, we propose group-wise quantization to offer a flexible trade-off between compression ratio and accuracy. The impact of LUT-GEMM comes from the high compression ratios achieved through low-bit quantization combined with efficient LUT-based operations. We show experimentally that, when applied to the OPT-175B model with 3-bit quantization, LUT-GEMM substantially accelerates token generation latency, achieving a 2.1$\times$ improvement on a single GPU compared to OPTQ, which relies on the costly dequantization process.
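To make the LUT idea in the abstract concrete, below is a minimal NumPy sketch of a LUT-based matrix-vector product for 1-bit binary-coding quantized weights (values in {-1, +1} with a per-row scale). It is an illustrative reconstruction, not the paper's CUDA kernel: the sub-vector length MU, the function names, and the packing scheme are assumptions chosen for clarity. The key point it demonstrates is that all 2^MU signed partial sums of each activation slice are precomputed once, so every group of MU weight bits is resolved by a single table lookup instead of MU multiply-accumulates, and no dequantization of the weights is ever performed.

```python
# Illustrative sketch (NumPy), not the paper's CUDA kernel.
# Weights: B in {-1,+1}^{M x K} with per-row scale alpha (1-bit BCQ).
# Activations x stay in full precision.  MU and all names are assumptions.

import numpy as np

MU = 8  # activations covered by one lookup table (2**MU entries each)

def build_luts(x):
    """For every length-MU slice of x, precompute all 2**MU signed sums
    sum_j s_j * x[j] with s_j in {-1,+1} (bit j of the index set => +1)."""
    k = x.shape[0]
    assert k % MU == 0
    num_groups = k // MU
    luts = np.empty((num_groups, 2 ** MU), dtype=x.dtype)
    for g in range(num_groups):
        xg = x[g * MU:(g + 1) * MU]
        for idx in range(2 ** MU):
            signs = np.array([1.0 if (idx >> j) & 1 else -1.0 for j in range(MU)])
            luts[g, idx] = np.dot(signs, xg)
    return luts

def lut_gemv(bits, alpha, x):
    """y = diag(alpha) @ B @ x, with B stored as packed bit indices `bits`
    (shape M x K/MU, one uint8 per MU weights).  The inner product over
    weights is replaced by table lookups."""
    luts = build_luts(x)                  # shared across all output rows
    m, num_groups = bits.shape
    y = np.zeros(m, dtype=x.dtype)
    for row in range(m):
        acc = 0.0
        for g in range(num_groups):
            acc += luts[g, bits[row, g]]  # one lookup instead of MU FMAs
        y[row] = alpha[row] * acc
    return y

if __name__ == "__main__":
    # Cross-check against a dense float GEMV on random data.
    rng = np.random.default_rng(0)
    M, K = 16, 64
    B = rng.choice([-1.0, 1.0], size=(M, K))
    alpha = rng.random(M)
    x = rng.standard_normal(K)

    bits = np.zeros((M, K // MU), dtype=np.uint8)
    for row in range(M):
        for g in range(K // MU):
            for j in range(MU):
                if B[row, g * MU + j] > 0:
                    bits[row, g] |= 1 << j

    print(np.allclose(lut_gemv(bits, alpha, x), (alpha[:, None] * B) @ x))  # True
```

In this sketch the lookup tables depend only on the activations, so their construction cost is amortized over all output rows; the paper's group-wise quantization would additionally attach a separate scale to each group of weights rather than a single scale per row, trading compression ratio against accuracy.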

Authors (10)
  1. Gunho Park (5 papers)
  2. Baeseong Park (12 papers)
  3. Minsub Kim (4 papers)
  4. Sungjae Lee (16 papers)
  5. Jeonghoon Kim (17 papers)
  6. Beomseok Kwon (7 papers)
  7. Se Jung Kwon (26 papers)
  8. Byeongwook Kim (21 papers)
  9. Youngjoo Lee (12 papers)
  10. Dongsoo Lee (30 papers)
Citations (55)