Polynomial Eigenvalue Method

Updated 30 December 2025
  • The polynomial eigenvalue method is an algorithmic framework that finds eigenvalues and eigenvectors of matrix polynomials without explicit determinant computation.
  • It employs trace-based Laguerre iterations and QR factorizations to achieve reliable, backward-stable convergence in applications like vibration analysis and control theory.
  • The method exploits structural sparsity (e.g., Hessenberg and tridiagonal cases) to reduce computational complexity while maintaining high numerical accuracy.

A polynomial eigenvalue method refers to algorithmic frameworks for determining the eigenvalues and corresponding eigenvectors of matrix polynomials, that is, expressions of the form $P(\lambda) = \sum_{i=0}^d \lambda^i A_i$ with $A_i \in \mathbb{C}^{n \times n}$ and $A_d \neq 0$. The central problem involves finding scalars $\lambda$ such that $\det P(\lambda) = 0$ and nonzero vectors $x$ satisfying $P(\lambda)x = 0$. This problem arises in vibration analysis, control theory, and diverse engineering applications. The eigenvalues may be finite or infinite, with the latter associated to the reversal polynomial. Modern polynomial eigenvalue methods eschew forming the determinant explicitly, instead relying on stable iterative techniques, most notably the Laguerre root-finding iteration adapted for matrix polynomials (Cameron et al., 2017).

1. Formulation and Structure of the Matrix Polynomial Eigenvalue Problem

The matrix polynomial eigenvalue problem (PEP) is formalized as seeking roots $\lambda$ of the scalar polynomial $p(\lambda)=\det P(\lambda)$, which encompasses both finite and infinite eigenvalues. Infinite eigenvalues correspond to zeros of the reversal polynomial, $\mathrm{rev}\,P(\rho) = \sum_{i=0}^d \rho^{d-i} A_i$, at $\rho = 1/\lambda = 0$. The eigensystem comprises $nd$ eigenvalues (counting multiplicities), together with associated eigenvectors $x$ such that $P(\lambda)x=0$; this eigenvalue count is guaranteed by the regularity condition $A_d \neq 0$.
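
As a small numerical sketch of the finite/infinite dichotomy (the example matrices are my own, not from the paper): a degree-one polynomial whose leading coefficient is nonzero but singular has one of its $nd$ eigenvalues at infinity, visible as a zero of the reversal polynomial at $\rho = 0$.

```python
import numpy as np

# d = 1, n = 2: P(lam) = A0 + lam*A1 has n*d = 2 eigenvalues, but the
# singular (though nonzero) leading coefficient sends one to infinity.
A0 = np.eye(2)
A1 = np.diag([1.0, 0.0])

def p(lam):        # det P(lam) = lam + 1
    return np.linalg.det(A0 + lam * A1)

def rev_p(rho):    # det rev P(rho) = rho * (rho + 1)
    return np.linalg.det(rho * A0 + A1)

# lam = -1 is the only finite eigenvalue; the zero of rev P at rho = 0
# is the infinite eigenvalue of P
```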

Traditional approaches proceed via linearization, embedding $P(\lambda)$ in a larger matrix pencil $L(\lambda) = A - \lambda B$ so that well-established solvers for the generalized eigenvalue problem (GEP) can be applied. However, such linearization can compromise numerical conditioning and stability in practice (You et al., 2017). Purpose-built iterative root-finding methods act directly on the matrix polynomial's spectral structure, enabling backward-stable computation with quantifiable error control (Cameron et al., 2017; Bini et al., 2012).
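
For contrast with the direct iterative approach, a minimal companion-linearization sketch (an illustrative implementation assuming an invertible leading coefficient; the function name is my own):

```python
import numpy as np

def companion_matrix(coeffs):
    """First companion linearization of P(lam) = sum_i lam^i A_i, assuming an
    invertible leading coefficient A_d; its eigenvalues are those of P."""
    d, n = len(coeffs) - 1, coeffs[0].shape[0]
    Ad_inv = np.linalg.inv(coeffs[-1])
    C = np.zeros((n * d, n * d), dtype=complex)
    C[: n * (d - 1), n:] = np.eye(n * (d - 1))           # identity shift blocks
    for i in range(d):                                   # bottom block row
        C[n * (d - 1):, n * i: n * (i + 1)] = -Ad_inv @ coeffs[i]
    return C

# P(lam) = lam^2 I - I on C^2: eigenvalues +1 and -1, each of multiplicity 2
coeffs = [-np.eye(2), np.zeros((2, 2)), np.eye(2)]
eigs = np.sort(np.linalg.eigvals(companion_matrix(coeffs)).real)
```

The $nd \times nd$ pencil makes the conditioning concern concrete: the companion matrix mixes coefficient blocks of possibly very different norms into one dense eigenproblem.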

2. Laguerre Iteration Applied to Matrix Polynomials

Distinctive among root-finding techniques for matrix polynomials is the trace-based Laguerre iteration. The scalar Laguerre method employs

$$S_1(\lambda) = \frac{p'(\lambda)}{p(\lambda)}, \qquad S_2(\lambda) = -S_1'(\lambda) = \sum_{i=1}^{N_1} \frac{1}{(\lambda - r_i)^2},$$

where $N_1 = nd$ is the degree of $p$ and the $r_i$ are its roots, and the update step is

$$\hat{\lambda} = \lambda - \frac{N_1}{S_1(\lambda) \pm \sqrt{(N_1-1)\bigl(N_1\, S_2(\lambda) - S_1(\lambda)^2\bigr)}}$$

In the matrix setting, $S_1(\lambda)$ and $S_2(\lambda)$ are stably computed via Jacobi's formula and the matrix equations

$$P(\lambda)\, X_1(\lambda) = P'(\lambda), \qquad P(\lambda)\, X_2(\lambda) = P''(\lambda),$$

$$S_1(\lambda) = \operatorname{trace}\bigl(X_1(\lambda)\bigr), \qquad S_2(\lambda) = \operatorname{trace}\bigl(X_1(\lambda)^2 - X_2(\lambda)\bigr).$$

Each iteration thus solves two $n\times n$ linear matrix equations, most conveniently via QR factorization, and evaluates only their traces, avoiding direct computation of $\det P(\lambda)$. The iteration proceeds until convergence is detected by residual norm, backward-error bound, or stagnation in the update (Cameron et al., 2017).
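
A compact sketch of one such iteration (illustrative code, not the authors' implementation; it uses dense solves in place of the QR-based ones described above, and all names are my own):

```python
import numpy as np

def eval_P(coeffs, lam):
    """Evaluate P, P', P'' at lam via Horner's scheme on the matrix coefficients."""
    n = coeffs[0].shape[0]
    P = np.zeros((n, n), dtype=complex)
    dP, ddP = np.zeros_like(P), np.zeros_like(P)
    for A in reversed(coeffs):
        ddP = ddP * lam + 2 * dP   # uses dP from the previous step
        dP = dP * lam + P          # uses P from the previous step
        P = P * lam + A
    return P, dP, ddP

def laguerre_step(coeffs, lam, N):
    """One trace-based Laguerre update; N = n*d is the total eigenvalue count."""
    P, dP, ddP = eval_P(coeffs, lam)
    X1 = np.linalg.solve(P, dP)    # P(lam) X1 = P'(lam)
    X2 = np.linalg.solve(P, ddP)   # P(lam) X2 = P''(lam)
    S1 = np.trace(X1)
    S2 = np.trace(X1 @ X1 - X2)
    disc = np.sqrt((N - 1) * (N * S2 - S1**2))
    denom = max(S1 + disc, S1 - disc, key=abs)  # sign maximizing |denominator|
    return lam - N / denom

# P(lam) = lam^2 I - diag(1, 4): eigenvalues are +-1 and +-2
coeffs = [-np.diag([1.0, 4.0]), np.zeros((2, 2)), np.eye(2)]
lam = 2.3 + 0.0j
for _ in range(50):
    P, _, _ = eval_P(coeffs, lam)
    if np.linalg.svd(P, compute_uv=False)[-1] < 1e-12:  # residual-style stop
        break
    lam = laguerre_step(coeffs, lam, N=4)
```

Started at $2.3$, the iterate is drawn rapidly to the nearby eigenvalue $2$, illustrating the cubic local convergence of Laguerre's method for simple roots.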

3. Initialization via Numerical Range and Newton Polygon

Robust convergence of any simultaneous root-finding method critically depends on high-quality initial guesses for eigenvalues. Two principal approaches are detailed:

  • Newton-polygon radii: Construct the scalar polynomial $w(\lambda)=\sum_{i=0}^d \|A_i\|\,\lambda^i$ with coefficients $a_j = \|A_j\|$. The upper Newton polygon of these coefficients yields abscissas $0 = k_0 < k_1 < \dots < k_q = d$ and corresponding radii $r_i = |a_{k_{i-1}}/a_{k_i}|^{1/(k_i-k_{i-1})}$; initial guesses are placed on the circles $|\lambda| = r_i$.
  • Numerical-range roots: Define $W(P) = \{\lambda \in \mathbb{C} : x^* P(\lambda) x = 0 \text{ for some } \|x\| = 1\}$. Using column-pivoted QR factorizations of $A_0$ and $A_d$, scalar polynomial equations of the form $q_j^* P(\lambda) q_j = 0$ are solved for each column $q_j$, producing root clusters aligned with the spectral features of $P$. For self-adjoint $P$, eigenvalues in $W(P)$ are bounded by upper Pellet bounds.

This principled selection of starting points is especially critical for highly clustered spectra and underpins the rapid global convergence observed in practice (Cameron et al., 2017).
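
The Newton-polygon step can be sketched as follows (an illustrative routine; the hull construction and names are my own, not from the paper):

```python
import numpy as np

def newton_polygon_radii(norms):
    """Radii for seeding initial eigenvalue guesses, from the upper Newton
    polygon of w(lam) = sum_i ||A_i|| lam^i; norms = [||A_0||, ..., ||A_d||]."""
    pts = [(i, np.log(a)) for i, a in enumerate(norms) if a > 0]
    hull = []                      # upper convex hull, built left to right
    for p in pts:
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # pop the middle point if it lies on or below the chord
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    ks = [k for k, _ in hull]
    radii = [(norms[ks[i - 1]] / norms[ks[i]]) ** (1.0 / (ks[i] - ks[i - 1]))
             for i in range(1, len(ks))]
    return ks, radii

# w(lam) = 2 + 3*lam + lam^2: its root moduli 1 and 2 are bracketed by the radii
ks, radii = newton_polygon_radii([2.0, 3.0, 1.0])
```

Here the hull keeps all three abscissas $k = 0, 1, 2$, giving radii $2/3$ and $3$ that bracket the root moduli $1$ and $2$ of the test polynomial.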

4. Stopping Criteria, Convergence, and Backward Stability

Iteration on candidate eigenvalues is halted when one of three criteria is met:

  • Small residual: If the smallest diagonal entry in magnitude of the $R$ factor from the QR factorization of $P(\lambda)$ falls below $\tau$, convergence is declared. The threshold is set by

$$\tau = \begin{cases} \alpha\,\epsilon, & |\lambda| \leq 1 \\ \tilde{\alpha}\,\epsilon, & |\lambda| > 1 \end{cases}$$

with $\alpha = \sum_{i=0}^d |\lambda|^i \|A_i\|$ and $\tilde{\alpha} = \sum_{i=0}^d |\rho|^{d-i} \|A_i\|$, where $\rho = 1/\lambda$.

  • Backward-error bound: If $\min_{b\neq 0} \dfrac{\|b\|}{\alpha\,\|P(\lambda)^{-1} b\|} < \epsilon$ (the Tisseur bound), $\lambda$ is accurate to precision $\epsilon$.
  • Stagnation: If $|\hat{\lambda} - \lambda| < \epsilon\,|\lambda|$, or the iteration reaches its maximum count.

For computed $\lambda$ meeting either of the first two criteria, the eigenpair is guaranteed backward stable to within $O(n\epsilon)$. This explicitly quantifies the numerical reliability of the computed spectrum (Cameron et al., 2017).
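
The residual test can be sketched as follows (an illustrative reading of the criterion; the use of 2-norms and the function name are my own choices):

```python
import numpy as np

def converged(coeffs, lam, eps=np.finfo(float).eps):
    """Residual stop: smallest |R_ii| from the QR factorization of P(lam)
    compared against the scale-aware threshold tau."""
    d = len(coeffs) - 1
    P = sum(lam**i * A for i, A in enumerate(coeffs))
    R = np.linalg.qr(P, mode='r')
    norms = [np.linalg.norm(A, 2) for A in coeffs]
    if abs(lam) <= 1:
        tau = eps * sum(abs(lam)**i * a for i, a in enumerate(norms))
    else:
        rho = 1.0 / lam
        tau = eps * sum(abs(rho)**(d - i) * a for i, a in enumerate(norms))
    return np.min(np.abs(np.diag(R))) < tau

# P(lam) = lam^2 I - I: lam = 1 is an eigenvalue, lam = 0.5 is not
coeffs = [-np.eye(2), np.zeros((2, 2)), np.eye(2)]
at_eig = converged(coeffs, 1.0)
off_eig = converged(coeffs, 0.5)
```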

5. Computation of Eigenvectors and Condition Numbers

Upon convergence of λ\lambda:

  • Right and left eigenvectors are constructed using the QR factorization. If a small pivot exists, the right eigenvector $x$ is obtained by solving a reduced system, with the left eigenvector $y$ set to the final column of $Q$. Otherwise, inverse iteration is applied to $R^*R$ or $RR^*$.
  • The normwise condition number of the simple eigenpair $(\lambda,x,y)$ is

$$\kappa(\lambda, P) = \frac{\alpha\,\|x\|\,\|y\|}{|\lambda\, y^* P'(\lambda)\, x|}$$

with analogous expressions for zero or infinite eigenvalues. This facilitates sensitivity analysis and quantifies perturbation effects (Cameron et al., 2017).
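
As a numerical sketch of the condition-number formula (an illustrative helper with 2-norms assumed; the function name and example are my own):

```python
import numpy as np

def eig_condition(coeffs, lam, x, y):
    """Normwise condition number of a simple, finite, nonzero eigenvalue lam
    with right eigenvector x and left eigenvector y (2-norms assumed)."""
    alpha = sum(abs(lam)**i * np.linalg.norm(A, 2) for i, A in enumerate(coeffs))
    dP = sum(i * lam**(i - 1) * A for i, A in enumerate(coeffs) if i > 0)
    return alpha * np.linalg.norm(x) * np.linalg.norm(y) / abs(lam * (y.conj() @ dP @ x))

# P(lam) = lam*I - diag(1, 2): at lam = 1 with x = y = e1, P'(lam) = I,
# so kappa = alpha = ||A0|| + |lam| * ||A1|| = 2 + 1 = 3
coeffs = [-np.diag([1.0, 2.0]), np.eye(2)]
e1 = np.array([1.0, 0.0])
kappa = eig_condition(coeffs, 1.0, e1, e1)
```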

6. Exploiting Structural Sparsity: Hessenberg and Tridiagonal Cases

Significant computational savings arise when the coefficient sequence $\{A_i\}$ is structured (Hessenberg or tridiagonal):

  • Hyman's method evaluates $\det P(\lambda)$ and its derivatives in $O(n^2)$ operations (Hessenberg) or $O(n)$ (tridiagonal), rather than $O(n^3)$ per evaluation.
  • The approach analyzes the last row of $P(\lambda)$, employing forward and adjoint solves to assemble determinant and derivative quantities, delivering log-derivative formulas:

$$\frac{p'(\lambda)}{p(\lambda)} = \frac{b'(\lambda)}{b(\lambda)} + \frac{q'(\lambda)}{q(\lambda)}$$

This reduces per-iteration cost sharply relative to the general dense case, with tridiagonal complexity scaling as $O(d n^2 + d^2 n^2)$. Efficient core operations make the method viable for the large-scale or highly structured eigenproblems typical of applied settings (Cameron et al., 2017).
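
The Hessenberg case of Hyman's idea can be sketched for a single determinant evaluation (illustrative code, not the paper's version, which also assembles the derivative quantities; the function name is my own):

```python
import numpy as np

def hyman_det(H):
    """det of an unreduced upper Hessenberg matrix in O(n^2): back-substitute
    through rows n..2 using the subdiagonal, then read the determinant off."""
    n = H.shape[0]
    x = np.zeros(n, dtype=complex)
    x[-1] = 1.0
    for i in range(n - 1, 0, -1):          # row i determines x[i-1]
        x[i - 1] = -(H[i, i:] @ x[i:]) / H[i, i - 1]
    q = H[0, :] @ x                        # now H @ x = q * e_1
    return q * (-1) ** (n - 1) * np.prod(np.diag(H, -1))

H = np.triu(np.arange(1.0, 17.0).reshape(4, 4), -1)   # 4x4 Hessenberg example
det_h = hyman_det(H)
det_ref = np.linalg.det(H)
```

Each back-substitution row touches only the entries to its right, which is where the $O(n^2)$ count per evaluation comes from.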

7. Performance, Numerical Experiments, and Scope

Benchmarks confirm:

  • For scalar polynomials ($n=1$), the method matches or outperforms established algorithms (POLZEROS, AMVW) in speed and accuracy, with $O(d^2)$ scaling.
  • For tridiagonal and Hessenberg problems, computational cost scales as $O(d^2 n^2)$ and $O(dn^3 + d^2 n^3)$, respectively, substantially below the dense case.
  • Backward errors remain within a few units of machine precision, matching state-of-the-art solvers such as QUADEIG.
  • Detailed forward-error plots and tables demonstrate stability and competitiveness across a wide range of problem sizes and structures.

The combination of Laguerre’s method, trace-based evaluation, spectrum-respecting initialization, and efficient structure exploitation yields a unified, backward-stable, and robust computational apparatus for polynomial eigenvalue problems (Cameron et al., 2017).


References

  • Cameron, Steckley. "On the application of Laguerre's method to the polynomial eigenvalue problem."
  • Bini, Noferini. "Solving polynomial eigenvalue problems by means of the Ehrlich–Aberth method."
  • Tisseur. "Backward error and condition of polynomial eigenvalue problems."
  • Plestenjak, Bini et al. "Tridiagonal and Hessenberg solvers for matrix polynomials."
  • AMVW: Aurentz, Mach, Vandebril, Watkins. "Fast and backward stable computation of roots of polynomials."
  • QUADEIG: Hammarling, Munro, Tisseur. "An algorithm for the complete solution of quadratic eigenvalue problems."
