
Iterative Davidson Algorithm

Updated 22 December 2025
  • Iterative Davidson Algorithm is a projection-based Krylov subspace method designed to compute a few extremal eigenvalues and eigenvectors of large, sparse, typically Hermitian matrices.
  • It alternates between subspace projection with Ritz-pair extraction and subspace expansion via a preconditioned correction equation, balancing computational cost against convergence speed.
  • Robust variants such as Jacobi-Davidson and block/multigrid forms are widely applied in quantum chemistry, lattice QCD, and quantum computing for scalable eigenproblem solutions.

The Iterative Davidson Algorithm refers to a family of projection-based Krylov subspace methods tailored for computing a few extremal eigenvalues and associated eigenvectors (or singular vectors) of large, sparse, typically Hermitian matrices or matrix pairs. Key variants—such as the classical Davidson, Jacobi-Davidson, and block/multigrid-accelerated forms—combine rapid, residual-driven subspace expansion with preconditioned correction vectors, enabling robust, scalable solution of both standard and generalized eigenproblems across domains including quantum chemistry, electronic structure, lattice QCD, and quantum computing. The method's distinguishing feature is an iterative, adaptive two-phase cycle: (1) subspace projection and approximate eigenpair extraction, followed by (2) subspace expansion via (possibly preconditioned and/or inexact) solutions of a low-dimensional correction equation related to the eigenvector residual.

1. Algorithmic Structure and Principle

The core iterative Davidson framework comprises the following cycle:

  1. Subspace Construction: Build an orthonormal basis V_m for the current subspace, with m ≪ N.
  2. Projected Eigenproblem: Project the large matrix A (or operator Q) into the subspace to form H = V_m^H A V_m (analogous for generalized or SVD settings).
  3. Ritz Pair Extraction: Solve the small eigenproblem H y = θ y and form Ritz vectors u = V_m y; select the eigenpair(s) of interest (θ closest to a specified shift, e.g., near-zero or target).
  4. Residual Computation: For each approximate eigenpair (θ, u), evaluate the residual r = A u - θ u. This quantifies the distance from true invariance.
  5. Correction Equation (Subspace Expansion): Expand the subspace by solving a correction equation. For standard Davidson:

(A - θI) d = -r

For Jacobi-Davidson (JD), the correction is restricted to the orthogonal complement of the current Ritz vector:

(I - u u^H)(A - θI)(I - u u^H) d = -r,  with d ⊥ u

In practice, this system is rarely solved exactly; iterative or preconditioned inexact solutions are used.

  6. Orthonormalization, Restart, and Locking: The new vector(s) d are orthonormalized against the current subspace and, if converged, locked. When m reaches a maximum, a restart is performed, retaining the best k vectors.

This sequence is repeated until residual norms for desired eigenpairs drop below a convergence threshold (Wu et al., 2015, Frommer et al., 2020, Ravibabu, 2019).
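The cycle above can be sketched compactly for the classical variant. The following is a minimal NumPy illustration, not an optimized implementation: it targets the k smallest eigenpairs of a symmetric matrix, expands with the diagonal (Jacobi) preconditioner of step 5, and restarts by keeping only the current Ritz vectors; the function name `davidson` and the parameter defaults are illustrative.

```python
import numpy as np

def davidson(A, k=1, tol=1e-8, max_subspace=20, max_iter=100):
    """Classical Davidson iteration for the k smallest eigenpairs of a
    symmetric matrix A, with a diagonal (Jacobi) preconditioner."""
    n = A.shape[0]
    diag = np.diag(A)
    # Step 1: start from unit vectors at the k smallest diagonal entries.
    V = np.eye(n)[:, np.argsort(diag)[:k]]
    theta, U = np.zeros(k), V
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)              # orthonormal basis V_m
        H = V.T @ A @ V                     # step 2: projected matrix
        theta, Y = np.linalg.eigh(H)        # step 3: small eigenproblem
        theta, Y = theta[:k], Y[:, :k]
        U = V @ Y                           # Ritz vectors u = V_m y
        R = A @ U - U * theta               # step 4: residuals r = A u - theta u
        if np.linalg.norm(R, axis=0).max() < tol:
            break
        # Step 5: diagonally preconditioned correction d = -(diag(A) - theta)^-1 r.
        D = diag[:, None] - theta[None, :]
        D[np.abs(D) < 1e-12] = 1e-12        # guard against division by zero
        V = np.hstack([V, -R / D])          # step 6: expand (QR re-orthonormalizes)
        if V.shape[1] > max_subspace:       # restart, keeping the Ritz vectors
            V = U
    return theta, U
```

For diagonally dominant matrices (the regime the diagonal preconditioner is designed for) this loop typically converges in a handful of outer iterations per eigenpair.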

2. Correction Equations, Preconditioning, and Inexact Solves

The correction equation is the principal means of subspace enrichment and a critical determinant of both convergence speed and robustness:

  • Preconditioning: The most basic Davidson preconditioner uses the inverse of the matrix diagonal, but state-of-the-art implementations employ block preconditioners, domain decomposition (DD), multigrid acceleration (AMG), or minimal auxiliary basis approximations. These can capture structure (e.g., multipole effects in quantum chemistry, local coherence in QCD) and significantly reduce the effective condition number (Zhou et al., 26 Apr 2024, Frommer et al., 2020, Liang et al., 2022).
  • Inexact/Iterative Inner Solves: For large problems, the correction equation is typically solved to modest accuracy using an iterative Krylov method (e.g., GMRES, MINRES, FGMRES), with preconditioning (Huang et al., 2017, Huang et al., 19 Apr 2024). Theory and extensive tests show that a low to moderate inner tolerance (e.g., 10⁻³–10⁻⁴) is sufficient for the outer iteration to mimic exact behavior, drastically reducing computational cost.
  • Block and Multilevel Variants: Simultaneous computation of multiple eigenpairs (block methods) and multilevel preconditioning (two-level Schwarz, aggregation AMG) are essential for robust parallelization and rapid convergence, especially for clustered or multiple eigenvalues (Liang et al., 2022, Frommer et al., 2020).
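As a concrete illustration of an inexact inner solve, the sketch below applies the projected Jacobi-Davidson operator matrix-free and caps GMRES at a single restart cycle, mimicking a modest inner tolerance. It is a simplified, unpreconditioned example for real symmetric A; the helper name `jd_correction` is illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jd_correction(A, theta, u, r, inner_steps=20):
    """Inexactly solve the Jacobi-Davidson correction equation
    (I - u u^T)(A - theta I)(I - u u^T) d = -r  with  d ⊥ u,
    for real symmetric A and a unit-norm Ritz vector u."""
    n = A.shape[0]

    def matvec(x):
        x = x - u * (u @ x)           # project onto the complement of u
        y = A @ x - theta * x         # apply A - theta I
        return y - u * (u @ y)        # project again

    op = LinearOperator((n, n), matvec=matvec, dtype=float)
    # One GMRES restart cycle only: an inexact inner solve is enough
    # for the outer Davidson iteration to make progress.
    d, _ = gmres(op, -r, restart=inner_steps, maxiter=1)
    return d - u * (u @ d)            # enforce d ⊥ u exactly
```

Because the minimum of the Rayleigh quotient over span{u, d} can never exceed its value at u, even a crudely solved correction equation cannot worsen the Ritz approximation.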

3. Canonical Variants and Applications

A partial taxonomy is as follows:

| Method | Target Problem Type | Notable Features |
| --- | --- | --- |
| Classical Davidson | Hermitian eigenproblem | Diagonal preconditioner, minimal residual expansion |
| Jacobi-Davidson (JD) | Hermitian/generalized | Projected, constrained correction equation |
| JD for SVD (JDSVD, IPJDSVD) | SVD/GSVD | Dual subspaces, block indefinite correction |
| Multigrid-accelerated Davidson | Lattice QCD, large N | AMG V-cycle as inner preconditioner |
| Block/Two-level BPJD | Multiple/clustered eigenvalues | Parallel, Schwarz DD preconditioning |
| Quantum Davidson (QDavidson) | Quantum simulation | Iterative, shallow-circuit Krylov subspace growth |

Classic applications include low-lying eigenstate computation in molecular electronic structure (TDDFT/CI), lattice QCD, SVD/GSVD for regularization and dimension reduction, and simulation of quantum dynamics on NISQ hardware (Berthusen et al., 12 Jun 2024, Frommer et al., 2020, Liang et al., 2022, Sharma et al., 2014).

4. Theoretical Properties and Convergence

  • Convergence Behavior: The iterative Davidson algorithm exhibits linear or superlinear convergence, largely governed by the accuracy of inner solves and the quality of preconditioning. The method typically requires O(k) outer steps for k eigenpairs, and robust preconditioning yields O(1)–O(10) inner Krylov iterations per step, with overall runtime nearly linear in both k and the problem size N (Frommer et al., 2020, Huang et al., 2017).
  • Robustness to Degeneracy and Clusters: Block and preconditioned variants maintain performance for tightly clustered or exactly degenerate eigenvalues (Liang et al., 2022, Huang et al., 19 Apr 2024).
  • Stagnation and Defectiveness: In standard Jacobi-Davidson, subspace expansion can stagnate if the correction equation produces a vector in the current subspace; this is linked to defective Ritz values in the projected problem. Safeguards involve fallback to residual expansion and alternative correction equations (Wu et al., 2015, Ravibabu, 2019).
  • Optimality and Scalability: For multigrid and domain-decomposition-preconditioned algorithms, contraction factors are independent of the mesh size h (in PDE cases) and robust with respect to eigenvalue gaps; increasing the number of subdomains or updating AMG prolongations dynamically improves scalability (Frommer et al., 2020, Liang et al., 2022).

5. Modern Extensions: Generalizations and Quantum Algorithms

  • Low-Rank and Manifold-Constrained Methods: For eigenproblems whose eigenvectors are (approximately) low-rank matrices, low-rank Jacobi-Davidson restricts both residual and correction equation to the fixed-rank manifold, reducing storage and computational complexity (Rakhuba et al., 2017).
  • Generalized Davidson for GSVD and SVD: For matrix pairs (A, B) or singular value problems, generalized and multidirectional Davidson methods extend the expansion to higher-order blocks or multiple directions per cycle, with thick restarts and SVD-based subspace extraction (Zwaan et al., 2017, Huang et al., 2017).
  • Quantum Davidson Algorithms: On noisy intermediate-scale quantum (NISQ) devices, QDavidson adaptively constructs Krylov subspaces by iterative measurement and residual-based correction, realizing rapid convergence for both ground and excited states with dramatically reduced circuit depth compared to quantum Lanczos or fixed Krylov builds. Subspace matrix elements are estimated via Hadamard or swap tests, residuals are formed via quantum evolution or operator application, and classical postprocessing solves the small projected eigenproblem (Tkachenko et al., 2022, Berthusen et al., 12 Jun 2024).

6. Performance, Implementation, and Best Practices

  • Preconditioning: Optimal choice of preconditioner is problem-dependent. The "rid" minimal auxiliary basis preconditioner exemplifies a modern, spectrum-aware approach that reduces residuals rapidly and avoids stagnation (Zhou et al., 26 Apr 2024).
  • Inner/Outer Tolerances: Conservative tolerances (typically 10⁻⁴) for inner solves balance accuracy and computational expense; best practices include dynamic monitoring and restart strategies.
  • Restart and Deflation: Thick restart and locking of converged vectors are routine techniques to control subspace growth and ensure memory efficiency, particularly critical for large-scale or block computations (Frommer et al., 2020, Zwaan et al., 2017).
  • Parallelization and Scalability: Block variants and two-level (coarse/fine) preconditioners enable highly parallel execution with minimized communication and memory requirements, as evidenced in large-scale electronic structure and QCD applications (Liang et al., 2022, Frommer et al., 2020).
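The locking step in such implementations amounts to keeping converged eigenvectors out of the active search space. The sketch below is a minimal version of that deflation-by-orthogonalization step; the helper name `ortho_against` and the two-pass Gram-Schmidt choice are illustrative.

```python
import numpy as np

def ortho_against(V, X, passes=2):
    """Orthonormalize candidate basis vectors V (columns) against the
    locked, converged eigenvectors X (orthonormal columns), then among
    themselves. Two Gram-Schmidt passes guard against rounding loss."""
    for _ in range(passes):
        V = V - X @ (X.T @ V)     # remove components along locked vectors
    Q, _ = np.linalg.qr(V)        # orthonormalize the remainder
    return Q
```

Expansion vectors passed through such a filter cannot reintroduce converged directions, which is what keeps locked eigenpairs deflated in subsequent cycles.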

In summary, the Iterative Davidson Algorithm and its modern variants constitute a canonical approach for large-scale eigenvalue, singular value, and generalized matrix problems, unifying residual-based adaptive subspace expansion, tailored and inexact preconditioning, and robust extraction principles. Recent innovations bridge classical and quantum computing, low-rank manifold constraints, and advanced domain decomposition, confirming the method's role as a central tool for high-performance scientific computation (Wu et al., 2015, Frommer et al., 2020, Zhou et al., 26 Apr 2024, Rakhuba et al., 2017, Huang et al., 2017, Liang et al., 2022, Huang et al., 19 Apr 2024, Zwaan et al., 2017, Tkachenko et al., 2022, Berthusen et al., 12 Jun 2024, Sharma et al., 2014, Ochi et al., 2016).
