
Stochastic Lanczos Quadrature (SLQ)

Updated 16 January 2026
  • Stochastic Lanczos Quadrature (SLQ) is a randomized, matrix-free method that estimates spectral sums like log-determinants by combining random probing with Lanczos quadrature.
  • It employs short Krylov subspace recurrences to generate accurate quadrature nodes and weights, reducing the need for full matrix eigendecomposition.
  • Enhancements such as block-probe and variance reduction techniques extend SLQ’s practical application in scalable machine learning and spectral density estimation.

Stochastic Lanczos Quadrature (SLQ) is a randomized, matrix-free algorithm for approximating spectral sums and the spectral density of large Hermitian matrices, specifically targeting trace expressions of the form $\mathrm{Tr}\,f(A)$, where $A$ is typically accessed only through matrix–vector products. In SLQ, randomized trace estimators are combined with numerical quadrature rules derived from short Krylov subspace (Lanczos) recurrences, producing accurate, high-confidence approximations for quantities such as the log-determinant, spectral measures, and spectral densities, without explicit formation or eigendecomposition of $A$.

1. Algorithmic Framework and Computational Core

The SLQ method combines stochastic trace estimation with Gaussian quadrature arising from the Lanczos process. For a given Hermitian $A \in \mathbb{R}^{n \times n}$ and analytic $f$, one seeks to approximate $\mathrm{Tr}\,f(A)$ efficiently:

  1. Random Probing: $N$ independent random probe vectors $v^{(i)}$ (unit-norm, often Rademacher or Gaussian) are generated. For any such isotropic probe,

$$\mathbb{E}\left[v^T f(A)\, v\right] = \frac{1}{n}\,\mathrm{Tr}\,f(A).$$

  2. Lanczos Quadrature: For each probe $v^{(i)}$, run $m$ steps of Lanczos starting from $v^{(i)}$ to build the tridiagonal $T_{m+1}^{(i)}$. Diagonalizing $T_{m+1}^{(i)}$ yields quadrature nodes $\theta_k^{(i)}$ and weights $\tau_k^{(i)} = (e_1^T y_k^{(i)})^2$ (with $y_k^{(i)}$ the eigenvectors of $T_{m+1}^{(i)}$).
  3. Spectral Trace Approximation: The $i$-th probe estimates $v^{(i)T} f(A)\, v^{(i)} \approx \sum_{k=1}^{m+1} \tau_k^{(i)} f(\theta_k^{(i)})$. The overall SLQ estimate is then

$$\widehat{\mathrm{Tr}\,f(A)} = \frac{n}{N} \sum_{i=1}^{N} \sum_{k=1}^{m+1} \tau_k^{(i)}\, f\big(\theta_k^{(i)}\big).$$

For the log-determinant, $\log\det(A) = \mathrm{Tr}\log A$, simply set $f(t) = \log t$ (Li et al., 2023, Chen et al., 2022).

The cost is dominated by $N$ sets of $m$ matrix–vector multiplications, making the total MVM count $O(Nm)$, with $m \ll n$ in most practical regimes.
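The three steps above can be sketched in a short NumPy implementation. This is a minimal illustration, not the tuned implementations from the cited papers; `slq_trace` and the SPD test matrix are ad hoc choices for the demo.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def slq_trace(matvec, n, f, num_probes=30, lanczos_steps=30, seed=None):
    """Estimate Tr f(A) for a Hermitian A accessed only via `matvec`,
    following the three steps above: random probing, Lanczos
    tridiagonalization, and Gauss quadrature on each small T."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_probes):
        # Step 1: unit-norm Rademacher probe.
        v = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
        # Step 2: Lanczos three-term recurrence started from the probe.
        alpha, beta = [], []
        q_prev, q, b_prev = np.zeros(n), v, 0.0
        for _ in range(lanczos_steps):
            w = matvec(q) - b_prev * q_prev
            a = q @ w
            w -= a * q
            b = np.linalg.norm(w)
            alpha.append(a)
            if b < 1e-12:          # invariant subspace found; stop early
                break
            beta.append(b)
            q_prev, q, b_prev = q, w / b, b
        theta, Y = eigh_tridiagonal(np.array(alpha),
                                    np.array(beta[: len(alpha) - 1]))
        # Step 3: quadrature estimate of v^T f(A) v from nodes/weights.
        tau = Y[0, :] ** 2
        total += tau @ f(theta)
    return n * total / num_probes

# Usage: log-determinant of a well-conditioned SPD test matrix.
rng = np.random.default_rng(0)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
est = slq_trace(lambda x: A @ x, n, np.log, num_probes=50, seed=1)
exact = np.linalg.slogdet(A)[1]
```

For a matrix this well conditioned, a few dozen Lanczos steps drive the quadrature error far below the Monte Carlo error, so the estimate tracks `slogdet` closely.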

2. Error Analysis, Node Asymmetry, and Complexity Guarantees

Error in SLQ arises from two distinct sources: quadrature (Lanczos) error and stochastic (Monte Carlo probe) error.

  • Quadrature Error for Asymmetric Nodes: The error bound for $(m+1)$-point Lanczos quadrature with potentially asymmetric nodes is

$$|I - I_m| \leq \frac{4 M_\rho}{1 - \rho^{-1}}\, \rho^{-(2m+2)},$$

where $M_\rho = \max_{z \in E_\rho} |f(z)|$ and $E_\rho$ is the Bernstein ellipse containing the spectrum of $A$ (Li et al., 2023).

  • Combined Probabilistic SLQ Bounds: For $A$ with spectrum in $[\lambda_{\min}, \lambda_{\max}]$, fixing accuracy $\epsilon$ and failure probability $\delta$, explicit formulas for $m$ and $N$ provide

$$\mathbb{P}\left( \left|\log\det A - \widehat{\log\det A}\right| \leq \epsilon n \right) \geq 1-\delta,$$

with $m = O(\log(1/\epsilon))$ and $N = O(\epsilon^{-2}\log(1/\delta))$ when using the optimal split of the error budget between the quadrature and stochastic terms (Li et al., 2023, Chen et al., 2022, Chen et al., 2021).

  • Symmetric vs. Asymmetric Nodes: Classical analyses assumed symmetry of the Lanczos quadrature nodes (valid for constant-diagonal tridiagonals), yielding slightly tighter bounds. However, this property fails generically—realistic $A$ typically yields asymmetric Ritz values—and worst-case guarantees must use the more general asymmetric-node rate. Asymmetric bounds have a denominator $1-\rho^{-1}$ versus $1-\rho^{-2}$ for the symmetric case, resulting in slightly looser but universally applicable guarantees (Li et al., 2023).
  • Complexity: For analytic $f$ and fixed $\epsilon$, the total work is typically $O(nmN)$ arithmetic operations when each matrix–vector product costs $O(n)$ (e.g., sparse $A$), which is near-linear in $n$ since $m \ll n$ and $N = O(\epsilon^{-2})$ for practical values of $\epsilon$. This achieves high-probability accuracy with respect to both absolute and relative error criteria (Li et al., 2023, Chen et al., 2021).

3. Extensions, Deflation, and Variance Reduction

Several recent advances refine the SLQ methodology:

  • Block-Probe SLQ / BOLT: Extending SLQ to use orthonormal block-probes (BOLT algorithm), block Lanczos iteration, and block quadrature increases efficiency, especially in near-flat-spectrum regimes. For a fixed total MVM budget $N_{\mathrm{mv}}$, block SLQ yields $O(1/N_{\mathrm{mv}})$ error compared to $O(N_{\mathrm{mv}}^{-1/2})$ for classical SLQ, matching the optimal rate for unbiased trace estimation (Yeon et al., 18 May 2025).
  • Variance-Reduced SLQ: By combining PCPS-style projection subspaces and Hutchinson estimators on the residual, one can decrease stochastic variance and accelerate convergence, especially for log-determinant estimation (Han et al., 2023).
  • Adaptive One-Probe and "Log-Det-ective" Strategies: Applying Nyström or similar low-rank preconditioners before SLQ can, in regimes of rapid spectral decay, enable high-accuracy log-determinant estimation with a single Gaussian probe, with variance bounded by the tail of the spectrum. Adaptive algorithms can cheaply certify when more SLQ probes are justified (Cortinovis et al., 9 Jan 2026).
  • Implicit Deflation via Krylov Subspaces: Even without explicit eigenpair removal, the single-vector Krylov subspace generated in SLQ aligns rapidly with dominant eigenspaces, achieving low-rank approximation and yielding error bounds in $W_1$ (Wasserstein) distance that scale with the singular value tail as $\epsilon \cdot \sigma_{\ell+1}(A)$, provided $m = O(\ell \log n + 1/\epsilon)$ (Bhattacharjee et al., 2024).

4. Spectral Measure, Spectral Density, and Applications

SLQ provides not just single spectral traces, but strong uniform-in-$x$ guarantees for the cumulative empirical spectral measure (CESM) and for spectral densities:

  • Spectral Measure Approximation: SLQ rapidly approximates the CESM with Wasserstein error scaling as $O(1/k)$ for $k$ Lanczos steps, achieving probabilistic control over the entire spectral distribution (Chen et al., 2021).
  • Spectral Density: For applications demanding the density of states (DOS), SLQ's direct sum-of-delta-masses output can be convolved to obtain a smooth spectral density, with empirical performance surpassing Chebyshev-based kernel polynomial methods (KPM) in Wasserstein distance under rapid spectral decay or presence of spikes/gaps in spectrum (Bhattacharjee et al., 2024, Chen et al., 2022).
  • Log Determinant and Trace Estimation: SLQ is widely used in machine learning and statistics for scalable approximation of $\log\det K$ for kernel matrices, computation of Kullback–Leibler divergence, free energy, and Hessian spectrum analysis.
  • Proxy KL and Wasserstein Estimators under Partial Access: Subblock SLQ and BOLT approaches can yield unbiased estimators for KL divergence and $W_2$ distance between Gaussians, even with only partial access to $A$ (e.g., reading principal minors) (Yeon et al., 18 May 2025).
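The delta-mass-to-density step mentioned above can be sketched directly: run Lanczos per probe, average the resulting node/weight pairs, and convolve with a Gaussian kernel. Function names and the smoothing width below are illustrative choices, not prescriptions from the cited works.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos_quadrature(matvec, n, m, rng):
    """One Lanczos run from a random unit probe; returns quadrature
    nodes and weights (the weights sum to 1)."""
    q = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
    q_prev, b_prev = np.zeros(n), 0.0
    alpha, beta = [], []
    for _ in range(m):
        w = matvec(q) - b_prev * q_prev
        a = q @ w
        w -= a * q
        b = np.linalg.norm(w)
        alpha.append(a)
        if b < 1e-12:              # invariant subspace; stop early
            break
        beta.append(b)
        q_prev, q, b_prev = q, w / b, b
    theta, Y = eigh_tridiagonal(np.array(alpha),
                                np.array(beta[: len(alpha) - 1]))
    return theta, Y[0, :] ** 2

def smoothed_dos(matvec, n, grid, sigma, num_probes=15, m=60, seed=0):
    """Average SLQ delta masses over probes, then convolve with a
    Gaussian of width sigma to get a smooth density estimate."""
    rng = np.random.default_rng(seed)
    dens = np.zeros_like(grid)
    for _ in range(num_probes):
        theta, tau = lanczos_quadrature(matvec, n, m, rng)
        kernels = np.exp(-((grid[None, :] - theta[:, None]) ** 2)
                         / (2.0 * sigma**2))
        dens += tau @ kernels
    return dens / (num_probes * sigma * np.sqrt(2.0 * np.pi))

# Demo: matrix with spectrum spread over [0, 2].
rng = np.random.default_rng(3)
n = 150
Qm, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Qm * np.linspace(0.0, 2.0, n)) @ Qm.T
grid = np.linspace(-1.0, 3.0, 801)
phi = smoothed_dos(lambda x: A @ x, n, grid, sigma=0.05)
mass = phi.sum() * (grid[1] - grid[0])   # density integrates to ~1
```

Since each probe's weights sum to exactly 1, the smoothed output is automatically a probability density over the spectrum, up to negligible Gaussian tail mass outside the grid.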

5. Comparison with Kernel Polynomial Methods (KPM)

SLQ and kernel polynomial methods (KPM) are leading randomized matrix-free quadrature techniques, but exhibit distinct practical properties:

  • Convergence Rate: SLQ (Gauss quadrature) and KPM (Jackson-damped Chebyshev) both achieve exponential decay in $m$ for analytic $f$, but SLQ’s adaptive node placement enables much faster convergence for spectra with large clusters or gaps.
  • Spectral Adaptivity: SLQ's quadrature nodes adapt automatically to the spectrum, concentrating quadrature near eigenclusters and away from gaps or outliers, whereas KPM distributes nodes uniformly after rescaling and requires explicit tuning for concentrated spectra.
  • Implementation Complexity: SLQ's implementation requires only Lanczos and small matrix diagonalizations per probe, whereas KPM necessitates Chebyshev moment computation and post-processing for density smoothing.
  • Recommended Use: SLQ is generally advantageous when only a small number of spectral traces are needed and accuracy is prioritized, while KPM remains competitive when smooth spectral densities are required over large intervals (Chen et al., 2022).
  • Empirical Performance: On a range of spectra (uniform, gapped, low-rank, real-world graphs), SLQ and its block/projection variants achieve or exceed the performance of KPM and classical randomized SVD trace estimators, with optimal scaling for spectral sum accuracy (Yeon et al., 18 May 2025, Han et al., 2023, Bhattacharjee et al., 2024).

6. Parameter Selection and Practical Guidelines

Parameter choice in SLQ centers on the number of probes $N$ and the Lanczos degree $m$:

  • Error Tolerance: For analytic $f$, achieve $\epsilon$-accuracy with $m \sim O(\log(1/\epsilon))$ and $N \sim O(\epsilon^{-2}\log(n/\delta))$.
  • Spectral Range Estimation: Estimate $\lambda_{\min}$ and $\lambda_{\max}$ (via power or Lanczos iteration) to compute the required quadrature ellipse parameters for the error bounds.
  • Optimal Error Budget Split: Utilize non-uniform allocation of the error budget between quadrature and stochastic errors to minimize total MVMs, solving the transcendental minimization for the split parameter $\alpha$ when necessary (Li et al., 2023).
  • Block- and Subblock-Extensions: Use block probes and subblock schemes for highly parallelizable, unbiased trace estimation under memory or access constraints, or to obtain monotonic convergence in partial-access regimes (Yeon et al., 18 May 2025, Bhattacharjee et al., 2024).
  • Preconditioning and Projection: For matrices with fast singular value decay, preconditioned SLQ—leveraging low-rank sketch-based or Nyström approximations—allows drastic probe reduction and high accuracy, particularly for log-determinant problems (Cortinovis et al., 9 Jan 2026, Han et al., 2023).
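These guidelines can be turned into a rough parameter-selection helper. The constants below are illustrative placeholders, not the sharp values derived in the cited analyses; only the $m \sim \log(1/\epsilon)$ and $N \sim \epsilon^{-2}\log(1/\delta)$ scalings are taken from the text above.

```python
import math

def slq_parameters(kappa, eps, delta, c_mc=24.0):
    """Heuristic SLQ parameter choice for f = log on a spectrum with
    condition number kappa. The Chebyshev/Bernstein rate
    rho = (sqrt(kappa)+1)/(sqrt(kappa)-1) governs the quadrature error;
    c_mc is an illustrative Monte Carlo constant, not a sharp value."""
    rho = (math.sqrt(kappa) + 1.0) / (math.sqrt(kappa) - 1.0)
    m = math.ceil(math.log(1.0 / eps) / math.log(rho))      # Lanczos degree
    N = math.ceil(c_mc * math.log(2.0 / delta) / eps**2)    # probe count
    return m, N

# Tightening eps raises both the degree and the probe count,
# with N growing much faster (quadratically in 1/eps).
m1, N1 = slq_parameters(kappa=100.0, eps=0.1, delta=0.05)
m2, N2 = slq_parameters(kappa=100.0, eps=0.01, delta=0.05)
```

Note how ill-conditioning enters only through the Lanczos degree (via $\rho \to 1$ as $\kappa \to \infty$), while the probe count is condition-independent, which matches the budget-split discussion above.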

7. Numerical Results and Empirical Observations

Empirical studies illustrate sharp convergence of SLQ and its variants:

  • Spectral Convergence: Error in approximating DOS, CESM, and trace decays rapidly with Lanczos degree; presence of spectral gaps accelerates convergence.
  • One-sample Preconditioned SLQ: In many SPD matrices with moderate spectral decay, a single Gaussian probe combined with Nyström preconditioning suffices to yield accurate log-determinant estimates with negligible bias (Cortinovis et al., 9 Jan 2026).
  • Variance Reduction and Subspace Approaches: Projection-based variants offer 30–50% MVM savings over vanilla SLQ, with conservative but reliable probabilistic error estimates (Han et al., 2023).
  • Block- and Subblock Variants for Partial-Access: BOLT and subblock SLQ approaches maintain unbiasedness and accuracy, outperforming Hutch++ and traditional methods in flat-spectrum or principal submatrix-only regimes (Yeon et al., 18 May 2025).
  • Superiority over KPM: On standard test cases (e.g., Heisenberg spin chain, kernel matrices), SLQ avoids typical KPM artifacts (Gibbs ripples), and VR-SLQ achieves leading-order error with negligible weight-variance corrections (Bhattacharjee et al., 2024, Chen et al., 2022).

In summary, SLQ and its modern extensions form a principled, flexible, and highly efficient toolkit for spectral sum and density approximation of large Hermitian matrices in both full and partial-access computational environments, supported by rigorous probabilistic error guarantees and strong empirical validation (Li et al., 2023, Chen et al., 2022, Bhattacharjee et al., 2024, Yeon et al., 18 May 2025, Cortinovis et al., 9 Jan 2026, Han et al., 2023, Chen et al., 2021).
