LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models (2206.09557v4)
Abstract: Recent advances in self-supervised learning and the Transformer architecture have significantly improved NLP, achieving remarkably low perplexity. However, the growing size of NLP models introduces a memory wall problem during the generation phase. To mitigate this issue, recent efforts have focused on quantizing model weights to sub-4-bit precision while preserving full precision for activations, resulting in practical speed-ups during inference on a single GPU. However, these improvements primarily stem from reduced memory movement; the weights must still be dequantized on the fly, a resource-intensive process, so the amount of computation is not actually reduced. In this paper, we introduce LUT-GEMM, an efficient kernel for quantized matrix multiplication, which not only eliminates the resource-intensive dequantization process but also reduces computational costs compared to previous kernels for weight-only quantization. Furthermore, we propose group-wise quantization to offer a flexible trade-off between compression ratio and accuracy. The effectiveness of LUT-GEMM comes from combining high compression ratios via low-bit quantization with efficient LUT-based operations. We show experimentally that when applied to the OPT-175B model with 3-bit quantization, LUT-GEMM substantially accelerates token generation latency, achieving a remarkable 2.1$\times$ improvement on a single GPU compared to OPTQ, which relies on a costly dequantization process.
- Gunho Park (5 papers)
- Baeseong Park (12 papers)
- Minsub Kim (4 papers)
- Sungjae Lee (16 papers)
- Jeonghoon Kim (17 papers)
- Beomseok Kwon (7 papers)
- Se Jung Kwon (26 papers)
- Byeongwook Kim (21 papers)
- Youngjoo Lee (12 papers)
- Dongsoo Lee (30 papers)
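
To make the LUT-based idea in the abstract concrete, below is a minimal NumPy sketch of a lookup-table matrix-vector product, assuming weights expressed as scaled {-1, +1} bit-planes (binary-coded quantization). The function names (`build_luts`, `lut_gemv`), the group size `mu`, and the per-output-column scale layout are illustrative assumptions for this sketch, not the authors' CUDA kernel, which builds its tables in GPU shared memory and supports group-wise scaling factors.

```python
import numpy as np

def build_luts(x, mu=8):
    """Precompute partial-sum lookup tables for sub-vectors of x.

    For each group of `mu` consecutive activations, the table holds the
    dot product of that sub-vector with every possible {-1, +1} pattern
    (2**mu entries). Illustrative sketch only.
    """
    k = x.shape[0]
    assert k % mu == 0
    num_groups = k // mu
    # patterns[p, j] = +1 if bit j of index p is set, else -1
    bits = (np.arange(2 ** mu)[:, None] >> np.arange(mu)) & 1
    patterns = 2.0 * bits - 1.0                       # (2**mu, mu)
    x_groups = x.reshape(num_groups, mu)              # (num_groups, mu)
    return patterns @ x_groups.T                      # (2**mu, num_groups)

def lut_gemv(luts, weight_bits, alphas, mu=8):
    """y[n] = sum_b alphas[b, n] * sum_g luts[weight_bits[b, n, g], g].

    `weight_bits[b, n, g]` packs the mu binary weights of bit-plane b,
    output column n, group g into one integer index. No dequantized
    full-precision weight matrix is ever materialized.
    """
    num_bits, n_out, num_groups = weight_bits.shape
    y = np.zeros(n_out)
    for b in range(num_bits):
        partial = luts[weight_bits[b], np.arange(num_groups)].sum(axis=1)
        y += alphas[b] * partial
    return y

# Tiny self-check against a dense reference (hypothetical sizes).
rng = np.random.default_rng(0)
mu, k, n_out, num_bits = 8, 32, 4, 3
x = rng.standard_normal(k)
weight_bits = rng.integers(0, 2 ** mu, size=(num_bits, n_out, k // mu))
alphas = rng.standard_normal((num_bits, n_out))

y_lut = lut_gemv(build_luts(x, mu), weight_bits, alphas, mu)

# Reconstruct dense {-1, +1} weights only to validate the sketch.
bits = (weight_bits[..., None] >> np.arange(mu)) & 1        # (b, n, g, mu)
B = (2.0 * bits - 1.0).reshape(num_bits, n_out, k)
W = (alphas[..., None] * B).sum(axis=0)                     # (n_out, k)
assert np.allclose(y_lut, W @ x)
```

The point of the table is that each group of `mu` activations is combined with every sign pattern exactly once during preprocessing, so the per-output inner loop reduces to table lookups, additions, and one scale multiplication per bit-plane, with no on-the-fly dequantization of weights.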