Polynomial Eigenvalue Method
- Polynomial eigenvalue method is an algorithmic framework that finds eigenvalues and eigenvectors of matrix polynomials without explicit determinant computation.
- It employs trace-based Laguerre iterations and QR factorizations to achieve reliable, backward-stable convergence in applications like vibration analysis and control theory.
- The method exploits structural sparsity (e.g., Hessenberg and tridiagonal cases) to reduce computational complexity while maintaining high numerical accuracy.
A polynomial eigenvalue method refers to algorithmic frameworks for determining the eigenvalues and corresponding eigenvectors of matrix polynomials—expressions of the form $P(\lambda) = \sum_{i=0}^{d} \lambda^i A_i$ with $A_i \in \mathbb{C}^{n \times n}$ and $A_d \neq 0$. The central problem involves finding scalars $\lambda$ such that $\det P(\lambda) = 0$ and nonzero vectors $v$ satisfying $P(\lambda)v = 0$. This problem arises in vibration analysis, control theory, and diverse engineering applications. The eigenvalues may be finite or infinite, with the latter associated to the reversal polynomial $\operatorname{rev}P(\lambda) = \lambda^d P(1/\lambda)$. Modern polynomial eigenvalue methods eschew forming the determinant explicitly, instead relying on stable iterative techniques—most notably the Laguerre root-finding iteration adapted for matrix polynomials (Cameron et al., 2017).
1. Formulation and Structure of the Matrix Polynomial Eigenvalue Problem
The matrix polynomial eigenvalue problem (PEP) is formalized as seeking roots of the scalar polynomial $p(\lambda) = \det P(\lambda)$, which encompasses both finite and infinite eigenvalues. Infinite eigenvalues of $P$ correspond to zeros at $\lambda = 0$ of the reversal polynomial $\operatorname{rev}P(\lambda) = \lambda^d P(1/\lambda)$. The eigensystem encompasses $nd$ eigenvalues $\lambda_1, \dots, \lambda_{nd}$ (including multiplicities) and associated eigenvectors $v_j \neq 0$ such that $P(\lambda_j)\,v_j = 0$. The problem is well posed when $P$ is regular, i.e., $\det P(\lambda) \not\equiv 0$.
Traditional approaches proceed via linearization, embedding in a larger matrix pencil to exploit well-established GEP solvers. However, such linearization can compromise numerical conditioning and stability in practice (You et al., 2017). Purpose-built iterative root-finding methods act directly on the matrix polynomial's spectral structure, enabling backward-stable computation with quantifiable error control (Cameron et al., 2017, Bini et al., 2012).
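For concreteness, the linearization route can be sketched in a few lines of NumPy for a quadratic PEP (a minimal illustration under the assumption that the leading coefficient is nonsingular; the random setup and variable names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

# Companion linearization of P(lam) = A0 + lam*A1 + lam^2*A2
# (assumes A2 is nonsingular, so all 2n eigenvalues are finite).
A2inv = np.linalg.inv(A2)
C = np.block([[np.zeros((n, n)), np.eye(n)],
              [-A2inv @ A0,      -A2inv @ A1]])
eigvals, eigvecs = np.linalg.eig(C)

# The top n-block of a linearization eigenvector is an eigenvector of P:
# A0 v + lam*A1 v + lam^2*A2 v = 0.
lam, v = eigvals[0], eigvecs[:n, 0]
residual = np.linalg.norm((A0 + lam * A1 + lam**2 * A2) @ v) / np.linalg.norm(v)
```

The point of the iterative methods surveyed below is to avoid this doubling of the problem dimension and the conditioning loss it can incur.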
2. Laguerre Iteration Applied to Matrix Polynomials
Distinctive among root-finding techniques for matrix polynomials is the trace-based Laguerre iteration. Writing $p(\lambda) = \det P(\lambda)$, a polynomial of degree $N = nd$, the scalar Laguerre method employs

$$G(\lambda) = \frac{p'(\lambda)}{p(\lambda)}, \qquad H(\lambda) = G(\lambda)^2 - \frac{p''(\lambda)}{p(\lambda)},$$

where the update step is

$$\lambda_{k+1} = \lambda_k - \frac{N}{G(\lambda_k) \pm \sqrt{(N-1)\left(N\,H(\lambda_k) - G(\lambda_k)^2\right)}},$$

with the sign chosen to maximize the magnitude of the denominator. In the matrix setting, $G$ and $H$ are stably computed via Jacobi's formula and linear matrix equations:

$$G(\lambda) = \operatorname{tr}\!\left(P(\lambda)^{-1}P'(\lambda)\right), \qquad H(\lambda) = \operatorname{tr}\!\left(\left(P(\lambda)^{-1}P'(\lambda)\right)^2\right) - \operatorname{tr}\!\left(P(\lambda)^{-1}P''(\lambda)\right).$$

Each iteration thus solves two linear matrix equations—most conveniently via QR factorization—and evaluates only their traces, avoiding direct computation of $\det P(\lambda)$. The iteration proceeds until convergence is detected by residual norm, backward-error bound, or stagnation in the update (Cameron et al., 2017).
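A bare-bones version of one such trace-based Laguerre iteration for a single eigenvalue might look as follows (a sketch of the idea only, using plain LU-based solves in place of the paper's QR-based solves; function and variable names are ours):

```python
import numpy as np

def laguerre_eig(coeffs, lam0, tol=1e-12, maxit=50):
    """Trace-based Laguerre iteration for one eigenvalue of
    P(lam) = sum_i coeffs[i] * lam**i; det P is never formed."""
    n = coeffs[0].shape[0]
    N = n * (len(coeffs) - 1)          # degree of det P
    lam = complex(lam0)
    for _ in range(maxit):
        P   = sum(A * lam**i for i, A in enumerate(coeffs))
        Pp  = sum(i * A * lam**(i - 1) for i, A in enumerate(coeffs) if i >= 1)
        Ppp = sum(i * (i - 1) * A * lam**(i - 2) for i, A in enumerate(coeffs) if i >= 2)
        try:
            X = np.linalg.solve(P, Pp)     # solves P X = P'
            Y = np.linalg.solve(P, Ppp)    # solves P Y = P''
        except np.linalg.LinAlgError:      # lam landed exactly on an eigenvalue
            break
        G = np.trace(X)                    # p'/p  via Jacobi's formula
        H = np.trace(X @ X) - np.trace(Y)  # equals G^2 - p''/p
        sq = np.sqrt((N - 1) * (N * H - G**2) + 0j)
        denom = G + sq if abs(G + sq) >= abs(G - sq) else G - sq
        step = N / denom
        lam -= step
        if abs(step) <= tol * max(1.0, abs(lam)):
            break
    return lam

# det P = (lam-1)(lam-2)^2(lam-3) for this diagonal quadratic example:
coeffs = [np.diag([2.0, 6.0]), np.diag([-3.0, -5.0]), np.eye(2)]
lam = laguerre_eig(coeffs, 0.9)
```

From the starting point 0.9 the iterate converges to the nearby eigenvalue 1.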
3. Initialization via Numerical Range and Newton Polygon
Robust convergence of any simultaneous root-finding method critically depends on high-quality initial guesses for eigenvalues. Two principal approaches are detailed:
- Newton-polygon radii: Construct the scalar polynomial $q(x) = \sum_{i=0}^{d} \|A_i\|\,x^i$. The Newton polygon of $q$—the upper convex hull of the points $(i, \log\|A_i\|)$—yields a sequence of radii $r_1 \le \cdots \le r_{nd}$; equispaced abscissas supply points on the circles $|z| = r_k$, for $k = 1, \dots, nd$.
- Numerical-range roots: Initial estimates may also be drawn from the numerical range of $P$, via roots of the scalar polynomials $q_j(\lambda) = v_j^* P(\lambda)\,v_j$ for selected unit vectors $v_j$. Using column-pivoted QR factorizations of the coefficient matrices, one such scalar polynomial is solved for each column $v_j$, producing root clusters aligned to spectral features of $P$. For self-adjoint $P$, the resulting estimates are further constrained by upper Pellet bounds.
This principled selection of starting points is especially critical for highly clustered spectra and underpins the rapid global convergence witnessed in practice (Cameron et al., 2017).
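The Newton-polygon computation itself is simple enough to sketch directly (our own minimal implementation, assuming every coefficient norm passed in is positive):

```python
import numpy as np

def newton_polygon_radii(c):
    """Root-modulus estimates for sum_i c[i] * x**i from its Newton polygon:
    each edge of the upper convex hull of the points (i, log c[i]), running
    from index i to index j, contributes j - i radii (c[i]/c[j])**(1/(j-i))."""
    logs = np.log(c)                      # assumes c[i] > 0 for all i
    hull = []                             # upper hull via a monotone sweep
    for i, y in enumerate(logs):
        while len(hull) >= 2:
            (i1, y1), (i2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or below the chord
            if (y2 - y1) * (i - i1) <= (y - y1) * (i2 - i1):
                hull.pop()
            else:
                break
        hull.append((i, y))
    radii = []
    for (i, yi), (j, yj) in zip(hull, hull[1:]):
        radii.extend([np.exp((yi - yj) / (j - i))] * (j - i))
    return radii

# (x - 1)(x - 100) = x^2 - 101 x + 100: the two radii track the root moduli.
radii = newton_polygon_radii([100.0, 101.0, 1.0])
```

The returned radii come out in nondecreasing order, since the upper-hull slopes decrease from left to right.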
4. Stopping Criteria, Convergence, and Backward Stability
Iteration on candidate eigenvalues is halted when one of three criteria is met:
- Small residual: If the smallest diagonal entry of the $R$ factor in the QR factorization of $P(\lambda_k)$ falls below a threshold on the order of the unit roundoff $u$ times $\|P(\lambda_k)\|$, convergence is declared.
- Backward-error bound: If the backward error $\eta(\lambda_k, v_k) = \dfrac{\|P(\lambda_k)\,v_k\|_2}{\left(\sum_{i=0}^{d} |\lambda_k|^i \|A_i\|_2\right)\|v_k\|_2}$ falls below $u$ (the Tisseur bound), the computed eigenvalue is accurate to working precision.
- Stagnation: If the update stagnates, $|\lambda_{k+1} - \lambda_k| \le u\,|\lambda_k|$, or the iteration reaches its maximum count.
For computed eigenvalues meeting either of the first two criteria, the eigenpair is guaranteed backward stable to within a modest multiple of the unit roundoff $u$. This explicitly quantifies the numerical reliability of the computed spectrum (Cameron et al., 2017).
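The Tisseur-style backward error used in the second criterion is cheap to evaluate; a sketch (our names, 2-norms throughout):

```python
import numpy as np

def backward_error(coeffs, lam, v):
    """Normwise backward error of an approximate eigenpair (lam, v):
    ||P(lam) v|| / ((sum_i |lam|^i ||A_i||) * ||v||)."""
    P = sum(A * lam**i for i, A in enumerate(coeffs))
    alpha = sum(abs(lam)**i * np.linalg.norm(A, 2) for i, A in enumerate(coeffs))
    return np.linalg.norm(P @ v) / (alpha * np.linalg.norm(v))

# Exact eigenpair of a diagonal quadratic example: lam = 1, v = e1.
coeffs = [np.diag([2.0, 6.0]), np.diag([-3.0, -5.0]), np.eye(2)]
eta = backward_error(coeffs, 1.0, np.array([1.0, 0.0]))
```

For an exact eigenpair the backward error vanishes, so the quantity directly measures how far a computed pair is from being exact for a nearby polynomial.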
5. Computation of Eigenvectors and Condition Numbers
Upon convergence of an eigenvalue $\lambda$:
- Right and left eigenvectors are constructed using the QR factorization of $P(\lambda)$. If a small pivot exists, the right eigenvector is obtained by solving a reduced triangular system, with the left eigenvector set to the final column of the unitary factor $Q$. Otherwise, inverse iteration is applied to $P(\lambda)$ or $P(\lambda)^*$.
- The normwise condition number of the simple eigenpair $(\lambda, x)$, with left eigenvector $y$, is

$$\kappa(\lambda) = \frac{\left(\sum_{i=0}^{d} |\lambda|^i \|A_i\|_2\right)\|y\|_2\,\|x\|_2}{|\lambda|\,\left|y^* P'(\lambda)\,x\right|},$$

with analogous expressions for zero or infinite eigenvalues. This facilitates sensitivity analysis and quantifies perturbation effects (Cameron et al., 2017).
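In code, the formula for a simple, finite, nonzero eigenvalue reads (our sketch; `x` and `y` are the right and left eigenvectors):

```python
import numpy as np

def eig_condition(coeffs, lam, x, y):
    """Normwise condition number of a simple, finite, nonzero eigenvalue lam
    of P(lam) = sum_i coeffs[i] * lam**i, with right/left eigenvectors x, y."""
    alpha = sum(abs(lam)**i * np.linalg.norm(A, 2) for i, A in enumerate(coeffs))
    Pp = sum(i * A * lam**(i - 1) for i, A in enumerate(coeffs) if i >= 1)
    return alpha * np.linalg.norm(y) * np.linalg.norm(x) / (
        abs(lam) * abs(np.conj(y) @ Pp @ x))

# Diagonal quadratic example: lam = 1 is a simple eigenvalue with x = y = e1.
coeffs = [np.diag([2.0, 6.0]), np.diag([-3.0, -5.0]), np.eye(2)]
kappa = eig_condition(coeffs, 1.0, np.array([1.0, 0.0]), np.array([1.0, 0.0]))
```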
6. Exploiting Structural Sparsity: Hessenberg and Tridiagonal Cases
Significant computational savings arise when the coefficient matrices $A_i$ are structured (Hessenberg or tridiagonal):
- Hyman's method evaluates $\det P(\lambda)$ and its derivatives in $O(n^2)$ operations in the Hessenberg case, or $O(n)$ in the tridiagonal case, rather than $O(n^3)$ per evaluation.
- The approach exploits the unreduced Hessenberg structure of $P(\lambda)$, employing forward and adjoint solves to assemble the determinant and the logarithmic derivatives $p'(\lambda)/p(\lambda)$ and $p''(\lambda)/p(\lambda)$ without ever forming $\det P(\lambda)$ explicitly.
This reduces per-iteration cost sharply relative to the general dense case, with tridiagonal evaluations scaling linearly in $n$. Efficient core operations enable the method's viability for large-scale or highly structured eigenproblems typical in applied settings (Cameron et al., 2017).
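For a single unreduced upper Hessenberg matrix, Hyman's determinant evaluation can be sketched as follows (our minimal version; the paper's variant additionally propagates the derivative quantities through the same recurrence):

```python
import numpy as np

def hyman_det(H):
    """Determinant of an unreduced upper Hessenberg matrix in O(n^2) flops.
    Choose x with x[n-1] = 1 so that rows 2..n of H x vanish; then
    det H = (-1)**(n-1) * (row 1 of H) . x * prod(subdiagonal of H)."""
    n = H.shape[0]
    x = np.zeros(n)
    x[-1] = 1.0
    for i in range(n - 1, 0, -1):        # back-substitute rows n..2
        x[i - 1] = -(H[i, i:] @ x[i:]) / H[i, i - 1]
    q = H[0, :] @ x
    return (-1) ** (n - 1) * q * np.prod(np.diag(H, -1))

rng = np.random.default_rng(1)
H = np.triu(rng.standard_normal((6, 6)), -1)   # random upper Hessenberg
H[np.arange(1, 6), np.arange(5)] = 1.0 + rng.random(5)  # keep it unreduced
```

Each step touches only one row, so the cost is one triangular-style sweep instead of a full factorization; in the tridiagonal case the rows have bounded bandwidth and the sweep is $O(n)$.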
7. Performance, Numerical Experiments, and Scope
Benchmarks confirm:
- For scalar polynomials ($n = 1$), the method matches or outperforms established algorithms (POLZEROS, AMVW) in speed and accuracy, with cost scaling quadratically in the polynomial degree.
- For tridiagonal and Hessenberg problems, determinant and derivative evaluations cost $O(n)$ and $O(n^2)$, respectively—substantially reduced from the $O(n^3)$ dense case.
- Backward errors remain within a few units of machine precision, matching premier solvers (QUADEIG).
- Detailed forward error plots and tables demonstrate stability and competitiveness across a wide spectrum of eigenvalue problem sizes and structures.
The combination of Laguerre’s method, trace-based evaluation, spectrum-respecting initialization, and efficient structure exploitation yields a unified, backward-stable, and robust computational apparatus for polynomial eigenvalue problems (Cameron et al., 2017).
References
- Cameron, Steckley. "On the application of Laguerre's method to the polynomial eigenvalue problem" (Cameron et al., 2017).
- Bini, Noferini. "Solving polynomial eigenvalue problems by means of the Ehrlich–Aberth method" (Bini et al., 2012).
- Tisseur. "Backward error and condition of polynomial eigenvalue problems."
- Plestenjak, Bini et al., "Tridiagonal and Hessenberg solvers for matrix polynomials."
- AMVW: Aurentz, Mach, Vandebril, Watkins. "Fast and backward stable computation of roots of polynomials."
- QUADEIG: Hammarling, Munro, Tisseur. "An algorithm for the complete solution of quadratic eigenvalue problems."