
RQ-KMeans: Hierarchical Residual Quantization

Updated 2 February 2026
  • Residual Quantization (RQ)-KMeans is a hierarchical vector quantization method that iteratively refines data approximations using multi-stage k-means clustering.
  • It integrates techniques like variance regularization, beam search, and local transformations to minimize quantization error and enhance codebook efficiency.
  • This approach balances computational complexity and storage requirements while leveraging rate-distortion theory to improve tasks such as ANN search and image restoration.

Residual Quantization (RQ)-KMeans is a hierarchical vector quantization technique in which multiple stages of k-means clustering are applied to iteratively quantize the residuals of previous approximations, enabling reduced distortion in representation of high-dimensional data and increased codebook efficiency. RQ-KMeans and its regularized and transformed variants have become central in tasks like approximate nearest neighbor (ANN) search, self-supervised representation learning, and image restoration, leveraging principles from rate-distortion theory and statistical models of data decorrelation.

1. Mathematical Formulation and Core Algorithm

Residual Quantization (RQ) models a data vector $x \in \mathbb{R}^d$ as a sum of codewords selected from $L$ codebooks, one per stage:

$$\hat{x} = \sum_{\ell=1}^{L} c^{(\ell)}_{k^{(\ell)}(x)}$$

where $c^{(\ell)}_{k^{(\ell)}(x)}$ is the codeword assigned to $x$ at stage $\ell$. The residual for stage $\ell$ is defined recursively:

$$r^{(0)} := x, \qquad r^{(\ell)} := r^{(\ell-1)} - c^{(\ell)}_{k^{(\ell)}(x)}$$

At each stage, k-means is used to minimize the squared residual error:

$$\min_{C^{(\ell)},\, k^{(\ell)}} \sum_{x} \left\| r^{(\ell-1)}(x) - c^{(\ell)}_{k^{(\ell)}(x)} \right\|_2^2$$

This process is repeated over all $L$ layers, with each assignment and codebook update step performed by classic k-means clustering. The total quantization error is:

$$E = \sum_{x} \left\| x - \sum_{\ell=1}^{L} c^{(\ell)}_{k^{(\ell)}(x)} \right\|_2^2$$

This greedy layerwise approach ensures each subsequent codebook approximates the residual left by the previous stages, with assignments and centroid updates decoupled across data points and stages (Ferdowsi et al., 2017, Nguyen et al., 4 Feb 2025, Liu et al., 2015, Yuan et al., 2015).
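The stagewise training and greedy encoding loop above can be sketched in a few lines of NumPy. This is a minimal illustration with a plain Lloyd's k-means and made-up function names, not code from any of the cited papers:

```python
import numpy as np

def kmeans(X, k, iters=25, seed=0):
    # Plain Lloyd's k-means: returns (centroids, assignments).
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        a = d.argmin(1)
        for j in range(k):
            if (a == j).any():
                C[j] = X[a == j].mean(0)
    return C, a

def rq_kmeans_train(X, L=3, K=8):
    # Stagewise codebook learning: each codebook fits the residual
    # left by all previous stages (r^(0) = X itself).
    codebooks, R = [], X.copy()
    for _ in range(L):
        C, a = kmeans(R, K)
        codebooks.append(C)
        R = R - C[a]          # residual passed to the next stage
    return codebooks

def rq_encode_greedy(x, codebooks):
    # Greedy encoding: pick the nearest codeword at each stage.
    codes, r = [], x.copy()
    for C in codebooks:
        j = int(((C - r) ** 2).sum(1).argmin())
        codes.append(j)
        r = r - C[j]
    return codes

def rq_decode(codes, codebooks):
    # Reconstruction is the sum of the selected codewords.
    return sum(C[j] for j, C in zip(codes, codebooks))
```

On training data, each added stage quantizes what the previous stages left over, so reconstruction error is non-increasing in the number of stages used.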

2. Hierarchical Codebook Training and Encoding

The hierarchical nature of RQ enables distributed codebook capacity:

  • Stagewise Codebook Learning: Each codebook $C^{(\ell)}$ is learned from the residuals $r^{(\ell-1)}$ using k-means. Initial residuals are simply the original data vectors; subsequent stages use the quantization residual left by prior stages (Nguyen et al., 4 Feb 2025, Liu et al., 2015).
  • Greedy Encoding: A vector is encoded by sequentially selecting nearest codeword indices at each stage, cumulatively representing the input by the sum of selected codewords.
  • Multi-Path / Beam Search Encoding: While greedy assignment yields efficient but suboptimal encodings, beam search maintains $P$ candidate representations per vector, selecting codeword sequences that minimize total distortion at polynomial computational cost. Exact encoding is NP-hard due to cross-stage interaction terms (Liu et al., 2015).
| Method | Complexity per Sample | Encoding Quality |
|---|---|---|
| Greedy k-means | $O(KLd)$ or $O(KL)$ | Suboptimal, no cross-terms |
| Beam search (width $P$) | $O(PKL)$ | Near-optimal, lower error |

Stagewise k-means codebook learning and encoding form the computational backbone of most RQ frameworks (Liu et al., 2015, Nguyen et al., 4 Feb 2025).
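Beam search encoding can be sketched as follows: keep the $P$ partial codeword sequences with the smallest residual energy at each stage, expand each with every codeword of the next codebook, and re-prune. A minimal NumPy illustration (the function name is invented for this sketch):

```python
import numpy as np

def rq_encode_beam(x, codebooks, width=4):
    # Beam search over codeword sequences: at each stage, expand every
    # surviving partial encoding by all K codewords, then keep the
    # `width` candidates with the smallest residual energy.
    beams = [([], x.copy())]            # (code sequence, residual)
    for C in codebooks:
        cand = []
        for codes, r in beams:
            res = r[None, :] - C        # residual after each codeword
            for j in range(len(C)):
                cand.append((codes + [j], res[j]))
        cand.sort(key=lambda t: float((t[1] ** 2).sum()))
        beams = cand[:width]
    return beams[0][0]                  # best full sequence found
```

With `width=1` this reduces to greedy encoding; with a width of at least $K^L$ it degenerates to exhaustive search, which brackets the quality of intermediate beam widths.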

3. Variance-Regularization and Rate-Distortion Theory

Regularized Residual Quantization (RRQ) augments RQ-KMeans with a variance regularization term consistent with reverse water-filling principles in rate-distortion theory (Ferdowsi et al., 2017):

  • Variance Regularization: Augments k-means objective with a penalty

$$\min_{C, A} \ \frac{1}{2} \|X - CA\|_F^2 + \frac{\lambda}{2} \left\| \operatorname{diag}(CC^T) - S \right\|_F^2$$

where $S$ is a diagonal matrix of target variances $\sigma_{C_j}^2 = (\sigma_j^2 - \gamma)^+$, $\gamma$ is a water-filling threshold, and $\lambda$ controls the tradeoff between fit and variance matching.

  • Active Dimension Selection: At each layer, only dimensions with variance above the threshold $\gamma$ are quantized, suppressing overfitting and enforcing sparsity in high dimensions.
  • Modified K-means Update: Assignment via nearest centroid on active dimensions, codebook update by minimizing regularized quartic terms, solved efficiently by per-row Newton steps.

This regularization yields sparse dictionaries and prevents overtraining when scaling to large $n$ and deep $L$ (Ferdowsi et al., 2017).
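The reverse water-filling rule $(\sigma_j^2 - \gamma)^+$ can be made concrete with a small sketch. Here $\gamma$ is chosen by bisection so that the total distortion $\sum_j \min(\gamma, \sigma_j^2)$ meets a budget, the standard rate-distortion construction; the function names and the budget-based selection of $\gamma$ are assumptions of this illustration, not details from the cited papers:

```python
import numpy as np

def choose_gamma(var, budget):
    # Pick gamma so that sum_j min(gamma, sigma_j^2) == budget
    # (reverse water-filling threshold, found by bisection).
    lo, hi = 0.0, float(var.max())
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if np.minimum(var, mid).sum() < budget:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def waterfill_targets(var, gamma):
    # Codeword variance targets (sigma_j^2 - gamma)^+; dimensions at or
    # below the threshold are inactive and left unquantized.
    targets = np.maximum(var - gamma, 0.0)
    return targets, targets > 0
```

Dimensions whose variance falls below $\gamma$ drop out entirely, which is exactly the active-dimension selection described above.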

4. Data Preprocessing and Transformations

Effective RQ-KMeans depends on statistical whitening and decorrelation of input data, especially for natural images:

  • Global 2D-DCT: Applied to images for spectrum decay and energy compaction.
  • Subband Partition and PCA: DCT coefficients segregated into frequency bands; within each band, full-rank PCA decorrelates features, yielding approximately independent, variance-decaying coordinates.
  • Local Transforms in TRQ: Transformed Residual Quantization (TRQ) uses per-cluster orthogonal transformations (learned by orthogonal Procrustes analysis) at each stage to align cluster-specific residual subspaces before k-means, reducing overall quantization error (Yuan et al., 2015).

Such preprocessing ensures data is amenable to reverse-water-filling regularization and allows efficient RQ operation at scale (Ferdowsi et al., 2017, Yuan et al., 2015).
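The PCA step of this pipeline is easy to illustrate: project centered data onto the eigenvectors of its covariance, ordered by decreasing eigenvalue, so that coordinates become decorrelated with decaying variances. A minimal NumPy sketch (the DCT and subband partition stages are omitted, and the function name is invented):

```python
import numpy as np

def pca_decorrelate(X):
    # Decorrelate features: after projection the sample covariance is
    # diagonal and the variances decay along the coordinates, which is
    # the shape the water-filling regularization expects.
    Xc = X - X.mean(0)
    w, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(w)[::-1]        # sort by decreasing variance
    return Xc @ V[:, order], w[order]
```

`numpy.linalg.eigh` returns eigenvalues in ascending order, hence the explicit reordering.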

5. Computational and Storage Complexity

Storage and compute for RQ-KMeans scale with the number of codebooks, codewords, and dimensionality:

  • Assignment Step: At layer $l$, cost $O(N K^{(l)} |A^{(l)}|)$, where $A^{(l)}$ is the set of active dimensions.
  • Codebook Update: $O(|A^{(l)}| (K^{(l)})^2 \times \#\text{Newton})$ for the regularized update.
  • Total (RRQ, $L$ layers): $O(NK|A|L)$ for assignment and $O(|A|K^2L)$ for codebook updates (summed over layers).
  • Test Encoding: $O(\sum_l K^{(l)} |A^{(l)}|)$.
  • Storage: $O(\sum_l |A^{(l)}| K^{(l)})$ floats for the codebooks.

TRQ introduces an additional $O(D^2)$ cost per codeword per level to store the $D \times D$ transformations, but this remains modest compared to the overall scale (Yuan et al., 2015, Ferdowsi et al., 2017).
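A worked instance of the storage formula makes the accounting concrete. All sizes below are hypothetical, chosen only to illustrate how codebook storage and per-vector code length follow from $K^{(l)}$ and $|A^{(l)}|$:

```python
# Hypothetical RRQ configuration: L = 4 layers, K = 256 codewords per
# layer, active dimensions |A^(l)| shrinking as low-variance dims drop.
K = [256, 256, 256, 256]
A = [128, 96, 64, 32]

# Codebook storage: sum_l |A^(l)| * K^(l) floats.
codebook_floats = sum(a * k for a, k in zip(A, K))

# Code length per vector: log2(K^(l)) bits per layer.
codes_per_vector_bits = sum(k.bit_length() - 1 for k in K)

print(codebook_floats)        # 81920 floats across all codebooks
print(codes_per_vector_bits)  # 32 bits to encode one vector
```

Note how shrinking active sets keep codebook storage far below the $L \cdot K \cdot d$ floats a dense configuration would need.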

6. Empirical Performance and Applications

RQ-KMeans and its variants exhibit improved codebook utilization, reduced distortion, and competitive downstream task performance:

  • Code Utilization: RQ achieves near 100% code usage per stage (CUR), avoiding dead codes prevalent in large single-codebook VQ (CUR often <21%) (Nguyen et al., 4 Feb 2025).
  • Quantization Quality: RRQ yields train distortion $\approx 0.85$ and test distortion $\approx 0.94$ on synthetic $n = 1000$ data, close to the rate-distortion optimum of $\approx 0.9185$. Standard k-means overfits (train $\approx 0.67$, test $\approx 1.00$) (Ferdowsi et al., 2017).
  • Nearest Neighbor Retrieval: Improved RVQ (IRVQ) with beam search and a PCA-based warm start boosts Recall@1 and Recall@4 metrics (e.g., on SIFT1M, IRVQ Recall@1 $\approx 0.38$ vs. RVQ $0.32$; Recall@4 $\approx 0.80$ vs. RVQ $0.70$) (Liu et al., 2015).
  • Representation Learning: BRIDLE leverages RQ for self-supervised pretraining, outperforming VQ in audio, image, and video benchmarks (with effective code usage 0.03–0.05 for RQ vs 0.004–0.015 for single-codebook VQ) (Nguyen et al., 4 Feb 2025).
  • Image Restoration: RRQ restores high-frequency content in super-resolution tasks, reconstructing sharp facial images from low-res data via multi-layer codebooks (Ferdowsi et al., 2017).
| Variant | Key Feature | Application |
|---|---|---|
| RRQ | Variance regularization | Super-resolution, decorrelated images |
| IRVQ | Hybrid codebook + beam search | High-dimensional ANN search |
| TRQ | Per-cluster transforms | ANN with low distortion |
| BRIDLE | Hierarchical RQ | Self-supervised representation |

In practice, these approaches combine statistical theory, efficient clustering, and practical heuristics for scalable high-accuracy quantization across diverse domains.

7. Limitations and Extensions

RQ-KMeans presents several inherent and practical challenges:

  • Greedy Encoding Suboptimality: Sequential encoding ignores cross-stage interactions, rendering the global optimum intractable (NP-hard). Beam search is a practical compromise yielding near-optimal encodings (Liu et al., 2015).
  • Performance Saturation: For classical RVQ, incremental distortion reduction wanes with increasing stages; improved variants address this by maintaining codebook entropy through hybrid training schemes.
  • Overfitting in High Dimensions: Vanilla RQ overtrains on variance-decaying, highly correlated data; variance regularization and careful preprocessing suppress this pathology (Ferdowsi et al., 2017).
  • Storage Overhead in TRQ: Local transforms increase memory requirements linearly with number of clusters and dimensions, limiting applicability in resource-constrained scenarios (Yuan et al., 2015).
  • Empirical Tuning: Hyperparameters such as number of layers, codebook size, regularization weights, and beam width demand empirical calibration for application-specific optimality.

Residual Quantization with k-means—augmented by variance regularization, subspace transforms, and multipath encoding—remains an active area of research, with ongoing advances in codebook design, scalable optimization, and integration into deep neural architectures (Ferdowsi et al., 2017, Nguyen et al., 4 Feb 2025, Yuan et al., 2015, Liu et al., 2015).
