
Low-Rank GEMM for Accelerated Matrix Computation

Updated 1 December 2025
  • Low-rank GEMM is a method that approximates matrix multiplication by factorizing inputs into low-rank components, reducing complexity from O(mkn) to O(r²n) when singular values decay rapidly.
  • It utilizes techniques like truncated and randomized SVD with adaptive rank selection to balance numerical precision against performance requirements.
  • Integration with low-precision arithmetic and specialized hardware, such as FP8 on GPUs, yields speedups up to 7.8× and significant memory savings in scientific and ML applications.

Low-rank GEMM (General Matrix Multiply) refers to a family of algorithms and implementations that exploit low-rank structure within matrices to accelerate matrix multiplication. These techniques use matrix factorization—typically via SVD or randomized variants—to reduce computational complexity, memory bandwidth, and energy consumption, trading a controlled loss in numerical precision for substantial gains in performance and resource efficiency. Low-rank GEMM methods are particularly effective when the matrix operands exhibit rapidly decaying singular spectra, as found in scientific computing, large ML workloads, and kernel/integral equation methods.

1. Mathematical Foundations and Core Algorithms

The foundational principle of low-rank GEMM is to approximate the input matrices $A \in \mathbb{R}^{m\times k}$ and $B \in \mathbb{R}^{k\times n}$ by low-rank factorizations. The canonical (truncated SVD) form is:

$$A \approx U_A \Sigma_A V_A^T,\quad B \approx U_B \Sigma_B V_B^T,\qquad \text{with } U_A \in \mathbb{R}^{m\times r},\ V_A \in \mathbb{R}^{k\times r},$$

where $r \ll \min(m, k, n)$ is the target rank.

The approximate product can then be expressed as:

$$C \approx (U_A \Sigma_A V_A^T)(U_B \Sigma_B V_B^T) = U_A\left[\Sigma_A (V_A^T U_B)\Sigma_B\right]V_B^T.$$

This factorization reduces the dominant computational cost from $O(mkn)$ to $O(r^2 n)$ (assuming $m \approx n \approx k$), so long as $r \ll n$. The quality of this approximation is guaranteed by the Eckart–Young theorem, which ensures that truncated SVD achieves the minimal possible error for a fixed rank $r$ in both the Frobenius and spectral norms.
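As a concrete illustration, the sketch below assembles the factored product in NumPy: the core matrix $\Sigma_A (V_A^T U_B)\Sigma_B$ is only $r \times r$, and the full $m \times n$ result is only formed at the end. This is a minimal sketch assuming exact truncated SVDs are affordable; the function name and synthetic data are illustrative.

```python
import numpy as np

def lowrank_gemm(A, B, r):
    """Approximate C = A @ B using rank-r truncated SVDs of A and B."""
    # Truncated SVD of each operand (exact SVD here for clarity).
    Ua, sa, VaT = np.linalg.svd(A, full_matrices=False)
    Ub, sb, VbT = np.linalg.svd(B, full_matrices=False)
    Ua, sa, Va = Ua[:, :r], sa[:r], VaT[:r, :].T   # A ≈ Ua @ diag(sa) @ Va.T
    Ub, sb, Vb = Ub[:, :r], sb[:r], VbT[:r, :].T   # B ≈ Ub @ diag(sb) @ Vb.T

    # Small r x r core: Sigma_A (Va^T Ub) Sigma_B.
    core = (sa[:, None] * (Va.T @ Ub)) * sb[None, :]

    # C ≈ Ua @ core @ Vb.T, assembled from the factors.
    return Ua @ (core @ Vb.T)

# Example with synthetic low-rank data.
rng = np.random.default_rng(0)
m = k = n = 512
A = rng.standard_normal((m, 32)) @ rng.standard_normal((32, k))
B = rng.standard_normal((k, 32)) @ rng.standard_normal((32, n))
C_approx = lowrank_gemm(A, B, r=32)
rel_err = np.linalg.norm(C_approx - A @ B) / np.linalg.norm(A @ B)
print(f"relative Frobenius error: {rel_err:.2e}")
```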

The choice of $r$ is governed by the decay of singular values and application-specific error tolerances. Typical strategies include a fixed fraction (e.g., $r = 0.1\,n$) or energy thresholding (capturing a specified fraction $\tau$ of the total matrix energy) (Metere, 24 Nov 2025).
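A minimal sketch of energy-based rank selection, assuming the singular values are already available (e.g., from the truncated SVD above); the threshold $\tau$ and the function name are illustrative:

```python
import numpy as np

def select_rank(singular_values, tau=0.99, r_max=None):
    """Smallest r whose leading singular values capture a fraction tau of
    the total energy sum_i sigma_i^2 (i.e., of ||A||_F^2)."""
    energy = np.cumsum(singular_values**2) / np.sum(singular_values**2)
    r = int(np.searchsorted(energy, tau) + 1)
    return min(r, r_max) if r_max is not None else r
```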

2. Decomposition and Approximation Schemes

Two families of decomposition methods are prevalent in low-rank GEMM:

  • Exact SVD: Provides optimal approximation but requires $O(n^3)$ work for $n\times n$ matrices, restricting its use to moderate sizes.
  • Randomized SVD (RSVD): Approximates the leading singular subspaces with high probability, achieving $O((m+n)r^2 + r^3)$ cost and near-optimal accuracy when singular values decay rapidly (Metere, 24 Nov 2025, Gu, 27 May 2024); a basic implementation is sketched below.
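The sketch below shows a basic randomized range-finder RSVD in NumPy (Gaussian sketch, QR, SVD of the small projected matrix). The oversampling and power-iteration defaults are illustrative, not values prescribed by the cited papers.

```python
import numpy as np

def rsvd(A, r, oversample=10, n_power_iter=1, rng=None):
    """Randomized SVD: rank-r approximation A ≈ U @ diag(s) @ Vt."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    k = min(r + oversample, min(m, n))

    # Range finder: sample the column space of A with a Gaussian sketch.
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k)))
    for _ in range(n_power_iter):        # optional power iterations
        Q, _ = np.linalg.qr(A.T @ Q)     # sharpen the captured subspace
        Q, _ = np.linalg.qr(A @ Q)

    # SVD of the small k x n projected matrix, then lift back.
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ Ub
    return U[:, :r], s[:r], Vt[:r, :]
```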

Low-rank GEMM implementations may dynamically select between exact and randomized decompositions via heuristics or based on hardware characteristics (e.g., matrix size thresholds, available accelerators). Rank selection is similarly adaptive: either using fixed fractions, approximate energy coverage (e.g., $r$ capturing $\tau=0.99$ of $\|A\|_F^2$), or maximizing performance under memory/bandwidth constraints.

For matrices arising in PDE or BEM contexts, especially $\mathcal{H}^2$-matrices, low-rank blockwise approximations are organized within hierarchical cluster trees and block trees, reducing both the computational and storage complexity to nearly linear regimes (Börm et al., 2014).

3. Integration with Low-Precision and Accelerated Hardware

Low-rank GEMM achieves further speed and memory benefits by integrating low-precision arithmetic and specialized hardware features. For example, FP8 quantization (E4M3 format) exploits the advanced tensor core capabilities of NVIDIA Ada/Hopper GPUs (Metere, 24 Nov 2025).

  • Quantization pipeline: Input factors (e.g., $U_A$, $V_A$, etc.) are quantized to FP8 for storage and data movement, saving memory bandwidth. Computations are performed in FP16 (for input operands) and accumulated in FP32 to maintain numerical stability.
  • Dynamic kernel dispatch: An AutoKernelSelector mechanism chooses between direct dense GEMM (FP32/FP16), low-rank GEMM in FP8, or hybrid pipelines depending on input sizes, ranks, and detected hardware support (a simplified sketch follows this list). For $N \geq 10{,}240$, low-rank GEMM in FP8 outperforms state-of-the-art cuBLAS dense kernels, with observed speedups up to $7.8\times$ and a memory reduction of 75% for $N=20{,}480$ (Metere, 24 Nov 2025).
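A simplified PyTorch sketch of the storage/compute split and size-based dispatch described above. This is not the AutoKernelSelector of the cited work: the threshold, per-tensor scaling scheme, and function names are assumptions, and it requires a PyTorch build (2.1 or later) with `torch.float8_e4m3fn` support.

```python
import torch

N_THRESHOLD = 10_240  # illustrative crossover point; tune per GPU/workload

def quantize_fp8(x: torch.Tensor):
    """Store a factor in FP8 (E4M3) with a single per-tensor scale."""
    scale = x.abs().max().clamp(min=1e-12) / 448.0   # 448 = max finite E4M3 value
    return (x / scale).to(torch.float8_e4m3fn), scale

def lowrank_matmul_fp8(Ua, core, VbT):
    """C ≈ Ua @ core @ VbT with FP8 storage and FP16 compute.
    Ua: m x r, core: r x r, VbT: r x n (low-rank factors)."""
    factors = []
    for f in (Ua, core, VbT):
        q, s = quantize_fp8(f)                    # FP8 for storage / data movement
        factors.append(q.to(torch.float16) * s)   # dequantize to FP16 for compute
    Ua16, core16, VbT16 = factors
    # FP16 matmuls; accumulation typically happens in FP32 on tensor-core backends.
    return Ua16 @ (core16 @ VbT16)

def gemm_dispatch(A, B, lowrank_factors=None):
    """Crude kernel selection: dense path for small sizes, FP8 low-rank path
    for large sizes when factors are available."""
    n = max(A.shape[0], B.shape[1])
    if lowrank_factors is None or n < N_THRESHOLD:
        return A @ B                              # dense FP32/FP16 path
    return lowrank_matmul_fp8(*lowrank_factors)
```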

Empirical results indicate that, for data with low intrinsic rank, low-rank FP8 GEMM achieves up to 380 TFLOPS on an RTX 4090, significantly exceeding the performance of high-precision dense routines.

4. Low-Rank GEMM with Residual Correction and Quantized Approximations

In the context of mixed-precision and quantized arithmetic, residual correction techniques further improve the trade-off between speed and fidelity.

  • LRAMM (Gu, 27 May 2024): Combines RSVD-based low-rank reduction with a three-stage mixed-precision GEMM. Input matrices are compressed via RSVD, operated upon in several quantized (low-bit integer) GEMMs with optimal bit-width allocation ($d_1=8$, $d_2=4$, $d_3=8$), and reconstructed to yield the final approximant. This pipeline yields up to $4\times$ speedups over conventional INT16 GEMM at relative errors $\sim 10^{-2}$ for strongly low-rank data.
  • LRQMM (Gu, 27 Sep 2024): Applies low-rank residual approximation to compensate for errors introduced by uniform quantization. After a low-bit quantized integer GEMM, the floating-point residual is sketched via RSVD and projected back, using only BLAS-2-level overhead. For large $N$ and small residual rank $r$ (e.g., $r=10$ for $N=2000$), normwise errors decrease by 1–2 orders of magnitude compared to direct quantized GEMM. In deep learning tasks (e.g., ResNet-50 on ImageNet), LRQMM restores Top-1 accuracy from 8.3% (direct 4-bit quantization) to 61.8%.

These methods are "data-free"—requiring no downstream retraining—and provide practical guidelines for balancing rank, quantization depth, and speedup, particularly on modern GPU architectures.
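A rough illustration of the residual-correction idea, in the spirit of LRQMM but not the paper's exact algorithm: a symmetric per-tensor INT8 GEMM is followed by a rank-$r$ correction built from the quantization residuals, reusing the `rsvd` helper sketched in Section 2. The quantization scheme and rank are illustrative choices.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization; returns q and its scale."""
    scale = np.abs(x).max() / 127.0 + 1e-12
    q = np.clip(np.rint(x / scale), -127, 127).astype(np.int8)
    return q, scale

def quantized_gemm_with_residual(A, B, r=10, rng=0):
    """INT8 GEMM plus a rank-r low-rank correction of the quantization error."""
    Aq, sa = quantize_int8(A)
    Bq, sb = quantize_int8(B)

    # Base product from the quantized operands (int32 accumulation).
    C_base = (Aq.astype(np.int32) @ Bq.astype(np.int32)) * (sa * sb)

    # Quantization residuals: A = A_hat + Ra, B = B_hat + Rb (exactly).
    A_hat, B_hat = Aq * sa, Bq * sb
    Ra, Rb = A - A_hat, B - B_hat

    # Rank-r sketches of the residuals (rsvd as defined in Section 2).
    Ua, da, Vta = rsvd(Ra, r, rng=rng)
    Ub, db, Vtb = rsvd(Rb, r, rng=rng)

    # Exact identity: A @ B = A_hat @ B_hat + Ra @ B + A_hat @ Rb.
    # The rank-r sketches make the two correction products cheap to form.
    corr = (Ua * da) @ (Vta @ B) + ((A_hat @ Ub) * db) @ Vtb
    return C_base + corr
```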

5. Hierarchical Low-Rank GEMM in Structured Matrices

Rank-structured matrices, such as the $\mathcal{H}^2$-matrices prevalent in PDE/BEM discretizations, afford almost-linear-complexity matrix arithmetic using hierarchical low-rank updates and recursive blockwise GEMM (Börm et al., 2014).

  • Cluster and block trees: Matrix indices are recursively organized into cluster trees; admissible far-field blocks admit $V_t S_b W_s^T$ representations with small rank $k$.
  • Local low-rank update: Product updates to $\mathcal{H}^2$-blocks are made in factorized form, followed by SVD-based recompression to maintain bounded rank (see the recompression sketch after this list).
  • Recursive triple-tree GEMM: The product $C=AB$ is constructed recursively over index triples, with local updates and recompressions at the leaves. The overall arithmetic cost scales as $O(k^2 n \log n)$, with $O(nk)$ storage.
  • Control of error: The local truncation error $\varepsilon_{\text{loc}}$ is set to maintain global accuracy, with theorem-level guarantees. In typical applications, observed block ranks remain bounded, resulting in mesh-independent efficiency.
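The factorized-update-plus-recompression step can be sketched in isolation: a low-rank block stored as $UV^T$ receives an additive update $XY^T$, and the stacked factors are recompressed to bounded rank via QR followed by a small SVD. This is a generic recompression routine, not the $\mathcal{H}^2$-specific machinery of Börm et al.; the names and truncation rule are illustrative.

```python
import numpy as np

def add_and_recompress(U, V, X, Y, tol=1e-8, k_max=None):
    """Update a low-rank block U @ V.T += X @ Y.T, then recompress.

    The stacked factors are orthogonalized with QR, the small core is
    truncated by SVD, and the result is returned in factored form."""
    U_new = np.hstack([U, X])            # m x (k1 + k2)
    V_new = np.hstack([V, Y])            # n x (k1 + k2)

    Qu, Ru = np.linalg.qr(U_new)         # orthogonalize both factor stacks
    Qv, Rv = np.linalg.qr(V_new)

    # The small (k1+k2) x (k1+k2) core carries all the information.
    Uc, s, Vct = np.linalg.svd(Ru @ Rv.T, full_matrices=False)

    # Truncate: keep singular values above a relative tolerance, capped at k_max.
    k = int(np.sum(s > tol * s[0]))
    if k_max is not None:
        k = min(k, k_max)

    U_out = Qu @ (Uc[:, :k] * s[:k])     # absorb singular values into U
    V_out = Qv @ Vct[:k, :].T
    return U_out, V_out                  # block ≈ U_out @ V_out.T
```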

Numerical experiments confirm the stated complexity and storage bounds in large-scale FEM/BEM contexts.

6. Error, Stability, and Regime of Applicability

Low-rank GEMM achieves its efficiency by explicitly controlling the trade-off between approximation error and algorithmic complexity.

  • Approximation error: By Eckart–Young, the Frobenius-norm error of rank-$r$ SVD truncation is minimal, equal to $\big(\sum_{i>r}\sigma_i^2\big)^{1/2}$, and hence governed by the decay of the singular values (illustrated numerically after this list). For quantized low-rank GEMM, derived error bounds depend on the chosen rank, the quantizer bit-width, and the distribution of input matrix entries (Metere, 24 Nov 2025, Gu, 27 May 2024, Gu, 27 Sep 2024).
  • Residual correction: Adding low-rank reconstructed residuals after the main quantized GEMM (as in LRQMM) substantially reduces the total error at modest extra cost.
  • Stability and robustness: For full-rank or ill-conditioned matrices, or for use cases requiring errors $\ll 1\%$, low-rank GEMM may provide little or no benefit, as the required $r$ approaches $n$. For data with moderate or rapid singular value decay (e.g., in kernel methods or most ML inference scenarios), the approach is highly effective.
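A quick numerical check of the Eckart–Young identity (the Frobenius error of the best rank-$r$ truncation equals the norm of the discarded singular-value tail), using an assumed geometrically decaying spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 400, 20
# Synthesize A with a geometrically decaying spectrum.
sigma = 0.7 ** np.arange(n)
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q1 * sigma) @ Q2.T

U, s, Vt = np.linalg.svd(A)
A_r = (U[:, :r] * s[:r]) @ Vt[:r, :]    # best rank-r approximation

err = np.linalg.norm(A - A_r)            # Frobenius norm by default
tail = np.sqrt(np.sum(s[r:] ** 2))       # norm of the discarded tail
print(err, tail)                         # the two agree to rounding error
```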

Failure modes include slow singular value decay, very small matrix sizes (where decomposition overheads dominate), or stringent accuracy requirements beyond the low-rank regime.

7. Practical Guidelines and Performance Summary

Practical deployment of low-rank GEMM follows these general guidelines:

  • Rank and bit-width selection: For generic data, choose $r \sim 0.1\,\min(m,k,n)$ and pick the intermediate quantization depth that optimizes accuracy per bit (e.g., $d_2=4$ in LRAMM).
  • Algorithm selection: Use direct dense kernels (cuBLAS FP32/FP16) for small $N \lesssim 4000$; above this threshold, and especially with hardware FP8 support, low-rank GEMM with AutoKernelSelector yields substantial speedups (Metere, 24 Nov 2025).
  • Integration: Modern low-rank GEMM systems are implemented as nn.Module/torch.autograd.Function wrappers in PyTorch, with custom CUDA/CUTLASS kernels underneath for FP8 storage and FP16 compute (a minimal wrapper sketch follows this list).
  • Empirical speed/accuracy trade-off: For $N \geq 10{,}000$, peak TFLOPS and memory savings are observed, with up to $7.8\times$ speedup and 75% memory reduction at 1–2% error.
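A minimal sketch of the kind of integration described above: a `torch.autograd.Function` whose forward pass multiplies precomputed low-rank factors and whose backward pass returns gradients with respect to those factors. The class is hypothetical, and plain `torch.matmul` stands in for the custom FP8/FP16 CUDA/CUTLASS kernels of a production implementation.

```python
import torch

class LowRankMatmul(torch.autograd.Function):
    """C = Ua @ core @ VbT, computed factor-by-factor so the dense product
    of the operands is never materialized before the final result."""

    @staticmethod
    def forward(ctx, Ua, core, VbT):
        ctx.save_for_backward(Ua, core, VbT)
        return Ua @ (core @ VbT)

    @staticmethod
    def backward(ctx, grad_out):
        Ua, core, VbT = ctx.saved_tensors
        grad_Ua = grad_out @ (core @ VbT).T
        grad_core = Ua.T @ grad_out @ VbT.T
        grad_VbT = (Ua @ core).T @ grad_out
        return grad_Ua, grad_core, grad_VbT

# Usage: factors from a truncated/randomized SVD, kept as trainable tensors.
m, n, r = 1024, 1024, 64
Ua = torch.randn(m, r, requires_grad=True)
core = torch.randn(r, r, requires_grad=True)
VbT = torch.randn(r, n, requires_grad=True)
C = LowRankMatmul.apply(Ua, core, VbT)
C.sum().backward()
```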

A plausible implication is that, as model sizes and batch dimensions continue to increase, low-rank GEMM methods—especially those exploiting hardware-level FP8 and mixed-precision overlays—will become the standard for large-scale matrix computation in both ML and scientific computing.

| Method/Concept | Complexity | Typical Rank $r$ / Error |
| --- | --- | --- |
| Full GEMM (FP32) | $O(n^3)$ | $n$ / exact |
| Low-Rank GEMM (FP8) | $O(r^2 n)$ | $r \sim 0.05n$–$0.2n$ / 1–2% |
| LRAMM (mixed-precision RSVD) | $O(r n^2)$ | $r \sim 0.1n$, $d_i = 4, 8$ |
| LRQMM (quantized, residual) | $O(n^2)$ (BLAS-3 + BLAS-2) | $r \sim 10$ (for $N = 2000$) |
| $\mathcal{H}^2$-GEMM | $O(k^2 n \log n)$ | $k$ mesh-independent / rapid decay (PDE) |

Low-rank GEMM provides a powerful, theoretically grounded, and practically validated approach to large matrix multiplication, balancing the competing demands of accuracy, compute throughput, and memory efficiency across a diversity of application domains (Metere, 24 Nov 2025, Gu, 27 May 2024, Gu, 27 Sep 2024, Börm et al., 2014).
