Modified Randomized Arnoldi Process
- The Modified Randomized Arnoldi Process is a method that accelerates Krylov subspace algorithms by employing randomized sketching and non-traditional orthogonalization.
- It reduces computational and communication costs in large-scale problems by correcting the Hessenberg structure to recover theoretical guarantees.
- The process offers numerical stability and robust convergence for matrix function evaluations and eigenvalue problems, enabling efficient high-dimensional computations.
The Modified Randomized Arnoldi Process is a class of algorithms designed to accelerate Krylov subspace methods by employing randomized sketching and non-traditional orthogonalization, while preserving or restoring key theoretical properties of the standard Arnoldi process. These modifications are primarily motivated by the need to reduce the cost and communication bottlenecks associated with classical Arnoldi orthogonalization in large-scale scientific computing. The resulting algorithms achieve computational and parallel efficiency, while ensuring well-conditioned bases for Krylov subspaces and delivering robust convergence and stability for matrix function evaluations and eigenvalue problems (Cortinovis et al., 2022, Grigori et al., 15 Jan 2026, Damas et al., 17 Dec 2025).
1. Foundations: Arnoldi and the Need for Modification
The classical Arnoldi process builds an orthonormal basis $V_m = [v_1, \dots, v_m]$ for the $m$-dimensional Krylov subspace

$$\mathcal{K}_m(A, b) = \operatorname{span}\{b, Ab, A^2 b, \dots, A^{m-1} b\}$$

using Gram–Schmidt orthogonalization. This leads to a Hessenberg decomposition

$$A V_m = V_{m+1} \underline{H}_m,$$

where $\underline{H}_m \in \mathbb{R}^{(m+1) \times m}$ is upper Hessenberg. While robust, the per-iteration cost at step $j$ is $O(n)$ per vector operation and $O(nj)$ flops for orthogonalization, with $O(j)$ global communications per iteration for modified Gram–Schmidt—factors that constrain scalability (Grigori et al., 15 Jan 2026, Damas et al., 17 Dec 2025).
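As a concrete reference point, the classical process can be sketched in a few lines of NumPy (an illustrative implementation, not code from the cited works):

```python
import numpy as np

def arnoldi(A, b, m):
    """Classical Arnoldi with modified Gram-Schmidt.
    Returns V (n x (m+1)) and Hessenberg H ((m+1) x m) with A V[:, :m] = V @ H."""
    n = b.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # j+1 inner products: O(nj) flops, O(j) syncs
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]     # assumes no breakdown
    return V, H

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) / 10.0
b = rng.standard_normal(100)
V, H = arnoldi(A, b, 12)
print(np.linalg.norm(A @ V[:, :12] - V @ H))  # Arnoldi relation holds to roundoff
```

The nested loop over prior basis vectors is exactly the per-step cost and synchronization bottleneck that the randomized variants below remove.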
Recent approaches accelerate Arnoldi by forming bases that are only "sketch-orthonormal" or even non-orthonormal, replacing high-dimensional inner products with randomized projections, and offloading heavy operations to randomized least-squares routines, often yielding substantial speedups with provably well-conditioned bases (Damas et al., 17 Dec 2025, Cortinovis et al., 2022).
2. Core Algorithmic Structure
The Modified Randomized Arnoldi Process introduces random sketching matrices (e.g., SRHT or SRFT) to perform Gram–Schmidt or Householder-type orthogonalization in a lower-dimensional space. A typical algorithmic workflow (as in (Cortinovis et al., 2022)) is:
- Sketching: Select a random embedding matrix $\Omega \in \mathbb{R}^{\ell \times n}$, with $\ell \ll n$.
- Initialization: Normalize $b$ (in sketched variants, by the sketched norm $\beta = \|\Omega b\|_2$), sketch it ($s_1 = \Omega v_1$), and set the first basis vector $v_1$.
- Iteration. For $j = 1, \dots, m$:
- Form $w = A v_j$, sketch $p = \Omega w$.
- Gram–Schmidt (or Householder) re-orthogonalize $p$ against the prior sketched basis vectors $s_1, \dots, s_j$.
- Apply the corresponding updates in full space to $w$ to obtain $v_{j+1}$.
- Grow the low-dimensional coefficients (Hessenberg $H_m$ or $R$ factors).
- Compressed Operator Construction:
The last column is corrected via a randomized least-squares problem of the form $h_m = \arg\min_h \|\Omega (A v_m - V_{m+1} h)\|_2$.
- Approximation:
$$f(A) b \approx \beta\, V_m f(H_m) e_1,$$
with $e_1$ the first canonical vector and $\beta$ incorporating the norm of $b$.
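The iteration above can be sketched as follows. This is an illustrative NumPy implementation in which a Gaussian embedding stands in for the SRHT/SRFT sketches used in the cited papers, and the function and variable names are ours:

```python
import numpy as np

def randomized_arnoldi(A, b, m, ell, rng):
    """Sketched Gram-Schmidt Arnoldi (illustrative; a Gaussian sketch is
    substituted for the SRHT/SRFT embeddings of the cited works)."""
    n = b.shape[0]
    Omega = rng.standard_normal((ell, n)) / np.sqrt(ell)   # oblivious embedding
    V = np.zeros((n, m + 1))          # full-space basis (sketch-orthonormal)
    S = np.zeros((ell, m + 1))        # sketched basis S = Omega @ V
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(Omega @ b)  # sketched norm of b
    V[:, 0] = b / beta
    S[:, 0] = Omega @ V[:, 0]
    for j in range(m):
        w = A @ V[:, j]
        p = Omega @ w
        # orthogonalize the sketch: small ell x (j+1) least-squares problem
        h, *_ = np.linalg.lstsq(S[:, :j + 1], p, rcond=None)
        w = w - V[:, :j + 1] @ h              # same update applied in full space
        H[:j + 1, j] = h
        sw = Omega @ w                        # sketch of the residual
        H[j + 1, j] = np.linalg.norm(sw)      # sketched normalization
        V[:, j + 1] = w / H[j + 1, j]
        S[:, j + 1] = sw / H[j + 1, j]
    return V, S, H, beta

rng = np.random.default_rng(1)
A = rng.standard_normal((300, 300)) / np.sqrt(300)
b = rng.standard_normal(300)
V, S, H, beta = randomized_arnoldi(A, b, m=15, ell=80, rng=rng)
print(np.linalg.cond(V))   # sketch-orthonormal basis, hence well conditioned
```

Note that the relation $A V_m = V_{m+1} H_m$ holds by construction, while the expensive $n$-dimensional inner products of classical Arnoldi are replaced by one $\ell \times (j{+}1)$ least-squares solve per step.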
These steps are realized in multiple architectures: Gram–Schmidt-based (cheap sketches in the inner product), Householder-based (randomized Householder reflectors; see (Grigori et al., 2024)), "sketch-and-select" (compressive projection pursuit as in (Güttel et al., 2023)), and with implicit restarting for eigenproblems (Damas et al., 2024).
3. Theoretical Guarantees and Similarity Restoration
While randomized orthogonalization produces well-conditioned bases, the resulting Hessenberg matrices differ from those of classical Arnoldi, impairing the interpretation of Ritz values and the containment of the numerical range ($W(H_m) \subseteq W(A)$ may not hold). This leads to irregular convergence or loss of optimality for eigenvalue and matrix function problems (Grigori et al., 15 Jan 2026, Cortinovis et al., 2022).
A critical modification restores "Arnoldi similarity": after the randomized basis $V_{m+1}$ and Hessenberg $H_m$ are constructed, the last Arnoldi vector is re-projected to enforce

$$A V_m = V_{m+1} \underline{\tilde{H}}_m$$

through a small least-squares problem, yielding a corrected Hessenberg $\tilde{H}_m$ similar to the standard $H_m^{\mathrm{Arn}}$:

$$\tilde{H}_m = T_m^{-1} H_m^{\mathrm{Arn}} T_m,$$

with $V_m = Q_m T_m$ for some invertible $T_m$ relating the randomized basis to the classical orthonormal basis $Q_m$. This modification recovers the classical Ritz values and ensures that, for any polynomial matrix function $f$, the approximation $\beta\, V_m f(\tilde{H}_m) e_1$ gives identical projections to the classical Arnoldi method (Grigori et al., 15 Jan 2026).
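The correction step can be illustrated as follows. This is a minimal sketch under simplifying assumptions (a Gaussian embedding instead of SRHT, and a QR factorization of the Krylov matrix standing in for the randomized basis), recomputing each Hessenberg column by a sketched least-squares fit onto $V_{m+1}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, ell = 150, 8, 50
A = rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

# Stand-in well-conditioned basis of K_{m+1}(A, b): QR of the Krylov matrix.
K = np.empty((n, m + 1))
K[:, 0] = b
for k in range(m):
    K[:, k + 1] = A @ K[:, k]
V, _ = np.linalg.qr(K)

Omega = rng.standard_normal((ell, n)) / np.sqrt(ell)   # Gaussian sketch
SV = Omega @ V

# Correct each Hessenberg column by a sketched least-squares fit of A v_j onto V_{m+1}.
H = np.zeros((m + 1, m))
for j in range(m):
    H[:, j], *_ = np.linalg.lstsq(SV, Omega @ (A @ V[:, j]), rcond=None)

# The corrected H restores the Arnoldi-type relation A V_m = V_{m+1} H.
print(np.linalg.norm(A @ V[:, :m] - V @ H))
```

Because $A v_j$ lies in $\mathcal{K}_{m+1}$, the sketched least-squares problem has an exact solution, so the relation is recovered up to roundoff at the cost of small $\ell \times (m{+}1)$ solves only.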
4. Computational Complexity and Communication
The randomized approaches consistently achieve lower asymptotic and practical costs:
| Process | Orthogonalization Flops | Mat-vec Calls | Communication per Step |
|---|---|---|---|
| Arnoldi (Gram–Schmidt) | $O(nm^2)$ | 1 | $O(m)$ global syncs (MGS) |
| Randomized Gram–Schmidt | $O(nm^2)$, sketched inner products | 1–2 | $O(1)$ global syncs |
| RHQR-Arnoldi | $O(nm^2)$, Level-3 BLAS | 1 | 1 global sync |
| Sketch-and-Select | $O(nmk)$, $k \ll m$ | 1 | $O(1)$ global syncs |

(Entries are representative asymptotic costs over $m$ steps.)
Randomized methods offload heavy inner-product and orthogonalization computations to lower-dimensional sketched problems and often exploit Level-3 BLAS for increased efficiency in the correction phase (Cortinovis et al., 2022, Grigori et al., 15 Jan 2026, Grigori et al., 2024, Güttel et al., 2023).
The sketch dimension $\ell$ is generally chosen as a small multiple of the basis size, e.g., $\ell = O(m)$ with $\ell \approx 2m$–$4m$; moderate embedding tolerances (e.g., $\varepsilon \approx 1/2$) typically suffice. When the basis becomes ill-conditioned, "whitening" or explicit QR recomputation in sketch space is used to restore conditioning (Cortinovis et al., 2022, Grigori et al., 2024).
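The whitening step can be sketched as follows (illustrative code; a Gaussian embedding again stands in for SRHT). The only large-dimensional work is one triangular solve applied to the basis:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, ell = 500, 20, 120
# Deliberately ill-conditioned basis: random directions with decaying scales.
V = rng.standard_normal((n, m)) * (2.0 ** -np.arange(m))
Omega = rng.standard_normal((ell, n)) / np.sqrt(ell)

# "Whitening": QR of the small sketched matrix, then apply R^{-1} in full space.
Q, R = np.linalg.qr(Omega @ V)          # ell x m QR, cheap
Vw = np.linalg.solve(R.T, V.T).T        # Vw = V R^{-1}, so Omega @ Vw = Q

print(np.linalg.cond(V), np.linalg.cond(Vw))
```

Since $\Omega V_w$ has exactly orthonormal columns, the embedding property bounds the condition number of $V_w$ itself, without ever forming a large-scale QR factorization.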
5. Numerical Stability, Conditioning, and Parameter Selection
Randomized orthogonalization inherits the backward and forward stability guarantees of classical orthogonalization but shifts the dependency from the matrix condition number to the sketching error and embedding parameters (Damas et al., 17 Dec 2025, Grigori et al., 2024). Finite-precision analyses show, for example, that the RHQR basis $Q_m$ satisfies a sketch-orthogonality bound of the form

$$\|(\Omega Q_m)^T (\Omega Q_m) - I\| = O(u),$$

with $u$ the unit roundoff, with high probability and independently of the condition number of the matrix being factorized. The randomized process is unconditionally stable up to an error controlled by the sketch's embedding quality and the arithmetic precision used in the sketch and small-matrix operations (Grigori et al., 2024). Parameter choices (sketch size $\ell$, embedding tolerance $\varepsilon$, mixed or low precision) are detailed with practical recommendations for high performance and stability.
6. Applications and Empirical Performance
The Modified Randomized Arnoldi Process is applicable to:
- Approximating matrix functions, e.g., the matrix exponential, square root, and fractional powers, via
$$f(A) b \approx \beta\, V_m f(H_m) e_1,$$
with convergence bounded by best polynomial approximation on the numerical range (cf. Crouzeix):
$$\|f(A) b - \beta\, V_m f(H_m) e_1\|_2 \le 2C\, \|b\|_2 \min_{p \in \mathcal{P}_{m-1}} \max_{z \in W(A)} |f(z) - p(z)|,$$
where $C \le 1 + \sqrt{2}$ is the Crouzeix–Palencia constant.
- Computing eigenvalues and eigenvectors in large sparse matrices, utilizing implicit restarting and polynomial filtering in a randomized fashion (cf. rIRA) (Damas et al., 2024).
- High-dimensional linear system solution frameworks (randomized GMRES, sketched orthogonalization methods) (Damas et al., 17 Dec 2025).
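The matrix-function application can be illustrated with the classical recurrence (a minimal sketch using `scipy.linalg.expm` for the small compressed problem; not code from the cited works):

```python
import numpy as np
from scipy.linalg import expm

def arnoldi(A, b, m):
    """Classical Arnoldi (modified Gram-Schmidt), for illustration."""
    n = b.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(3)
n, m = 200, 25
G = rng.standard_normal((n, n))
A = (G + G.T) / (2 * np.sqrt(n))        # symmetric, modest spectrum
b = rng.standard_normal(n)

V, H = arnoldi(A, b, m)
# f(A) b ~ beta * V_m f(H_m) e_1, here with f = exp
approx = np.linalg.norm(b) * (V[:, :m] @ expm(H[:m, :m])[:, 0])
exact = expm(A) @ b
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

Only the small $m \times m$ matrix $H_m$ is passed to the dense `expm`; the randomized variants change how $V_m$ and $H_m$ are built, not this final evaluation formula.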
Empirical studies demonstrate that similarity-restoring modifications eliminate the convergence pathologies (spikes, stagnation) observed in naive randomized Arnoldi, delivering convergence and final accuracy identical to classical orthogonalization at a fraction of the cost (Grigori et al., 15 Jan 2026, Cortinovis et al., 2022). Substantial speedups are reported for large, sparse $A$.
7. Extensions, Variants, and Related Algorithms
Several notable extensions and related approaches have been developed:
- Randomized Householder QR-based Arnoldi: Replaces traditional Gram–Schmidt with single-synchronization RHQR, matching or exceeding classic Householder QR stability, and reducing per-step communication to one global synchronization (Grigori et al., 2024).
- Sketch-and-select Arnoldi: At each iteration, only $k \ll m$ prior basis vectors are used for projection, with subset selection performed via compressive-sensing heuristics, resulting in an overall cost linear in $m$ (for fixed $k$) with empirical stability close to fully orthogonalized bases (Güttel et al., 2023).
- Restarted and block variants: Efficiently combine randomized orthogonalization with implicit restarting schemes, supporting large-scale eigenvalue computations, and enabling high communication efficiency in distributed environments (Damas et al., 2024, Damas et al., 17 Dec 2025).
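One possible selection heuristic along these lines can be sketched as follows; the function name and the largest-coefficient rule are illustrative assumptions, not the exact procedure of (Güttel et al., 2023):

```python
import numpy as np

def sketch_and_select_step(w, V, S, Omega, k):
    """One illustrative sketch-and-select step: estimate all coefficients in
    sketch space, keep the k of largest magnitude, and orthogonalize against
    only those k prior basis vectors."""
    p = Omega @ w
    c, *_ = np.linalg.lstsq(S, p, rcond=None)       # sketched coefficients
    idx = np.argsort(np.abs(c))[-k:]                # k largest contributors
    h = np.zeros(V.shape[1])
    h[idx], *_ = np.linalg.lstsq(S[:, idx], p, rcond=None)
    return w - V @ h, h                             # only k full-space axpys

rng = np.random.default_rng(4)
n, j, ell, k = 400, 30, 150, 5
V, _ = np.linalg.qr(rng.standard_normal((n, j)))    # stand-in prior basis
Omega = rng.standard_normal((ell, n)) / np.sqrt(ell)
S = Omega @ V
# w overlaps strongly with only three prior basis vectors plus a new direction
w = 3 * V[:, 0] - 2 * V[:, 7] + V[:, 19] + 0.05 * rng.standard_normal(n)
w_new, h = sketch_and_select_step(w, V, S, Omega, k)
print(np.linalg.norm(V[:, [0, 7, 19]].T @ w_new))   # dominant overlaps much reduced
```

The full-space work per step drops from $O(nj)$ to $O(nk)$, which is the source of the linear-in-$m$ overall cost noted above.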
A common thread is the use of oblivious subspace embeddings (e.g., SRHT), randomized Gram–Schmidt or Householder procedures, and, when needed, explicit corrections to the Hessenberg structure to recover theoretical guarantees.
For complete algorithmic details, proofs of similarity restoration, recommended implementation parameters, and full empirical results, see the foundational works (Cortinovis et al., 2022, Grigori et al., 15 Jan 2026, Grigori et al., 2024, Güttel et al., 2023, Damas et al., 2024), and (Damas et al., 17 Dec 2025).