High-Rate Nested-Lattice Quantized Matrix Multiplication with Small Lookup Tables (2505.13164v1)
Abstract: Recent work has shown that the quantization-for-matrix-multiplication problem can be optimally solved by quantizing each column of each matrix using a nested lattice code, and then multiplying the de-quantized matrices. It was further demonstrated that when product codes of sub-dimension $d$ and rate $R$ are used, the de-quantization and inner product operations can be implemented by querying a lookup table (LUT) of size $2^{2dR}$, but this is only useful when $dR$ is sufficiently small. This in turn limits LUT-based inner product decoding to low-rate quantizers. In this work, we develop a rate-$R$ hierarchical nested lattice quantization framework, which quantizes each vector to $M$ layers and admits LUT-based inner product decoding using an LUT of size $2^{2d\frac{R}{M}}$, allowing for high-rate quantization. We provide analytic bounds on the loss of the developed scheme compared to standard nested lattice quantizers, and numerically illustrate that this loss is negligible. Thus, our scheme enables the use of small LUTs without compromising the overall distortion.
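To make the layered LUT mechanism concrete, here is a minimal Python sketch. It is not the paper's construction: it replaces the nested lattice with the integer lattice $\mathbb{Z}^d$ and restricts inputs to exact integer vectors in $[0, q^M)^d$, so that each vector decomposes into $M$ base-$q$ digit layers with $q = 2^{R/M}$. The helper names `encode` and `lut_inner_product` and the parameter values are illustrative assumptions; the point is only that the inner product is recovered from $M^2$ queries to a table of size $q^{2d} = 2^{2dR/M}$, rather than one query to a table of size $2^{2dR}$.

```python
import itertools
import numpy as np

d = 2  # sub-dimension of the product code (assumed small)
q = 4  # per-layer nesting ratio: q = 2^(R/M)
M = 3  # number of hierarchical layers; total rate R = M * log2(q) bits/dim

# Toy "lattice": Z^d, with layer codebook {0, ..., q-1}^d.
# A real implementation would use a good nested lattice; this sketch
# only illustrates the layered LUT mechanics.
codebook = list(itertools.product(range(q), repeat=d))

# Precompute the inner-product LUT over codeword pairs:
# q^d * q^d = 2^(2dR/M) entries, independent of the number of layers M.
lut = {(a, b): sum(ai * bi for ai, bi in zip(a, b))
       for a in codebook for b in codebook}

def encode(x):
    """Decompose an integer vector in [0, q^M)^d into M base-q digit layers."""
    digits = []
    for _ in range(M):
        digits.append(tuple(int(v) % q for v in x))
        x = x // q
    return digits

def lut_inner_product(dx, dy):
    """Recover <x, y> from the layer digits using only LUT queries:
    x = sum_m q^m dx[m] and y = sum_l q^l dy[l], so
    <x, y> = sum_{m,l} q^(m+l) <dx[m], dy[l]>."""
    return sum(q ** (m + l) * lut[(dx[m], dy[l])]
               for m in range(M) for l in range(M))

x = np.array([13, 42])
y = np.array([7, 55])
assert lut_inner_product(encode(x), encode(y)) == int(x @ y)
```

In this toy integer setting the decomposition is lossless, so the LUT-based inner product is exact; in the paper's scheme the layers come from nested lattice quantization of real-valued vectors, and the abstract's analytic bounds quantify the resulting distortion loss relative to a standard (single-layer) nested lattice quantizer of the same rate.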