
Conditioning and backward error of block-symmetric block-tridiagonal linearizations of matrix polynomials (1706.04150v1)

Published 13 Jun 2017 in math.NA

Abstract: For each square matrix polynomial $P(\lambda)$ of odd degree, a block-symmetric block-tridiagonal pencil $\mathcal{T}_P(\lambda)$ was introduced by Antoniou and Vologiannidis in 2004, and a variation $\mathcal{R}_P(\lambda)$ was introduced by Mackey et al. in 2010. These two pencils have several appealing properties: they are always strong linearizations of $P(\lambda)$; they are easy to construct from the coefficients of $P(\lambda)$; the eigenvectors of $P(\lambda)$ can be recovered easily from those of $\mathcal{T}_P(\lambda)$ and $\mathcal{R}_P(\lambda)$; the two pencils are symmetric (resp. Hermitian) when $P(\lambda)$ is; and they preserve the sign characteristic of $P(\lambda)$ when $P(\lambda)$ is Hermitian. In this paper we study the numerical behavior of $\mathcal{T}_P(\lambda)$ and $\mathcal{R}_P(\lambda)$. We compare the conditioning of a finite, nonzero, simple eigenvalue $\delta$ of $P(\lambda)$ when it is considered as an eigenvalue of $P(\lambda)$ and as an eigenvalue of $\mathcal{T}_P(\lambda)$. We also compare the backward error of an approximate eigenpair $(z,\delta)$ of $\mathcal{T}_P(\lambda)$ with the backward error of an approximate eigenpair $(x,\delta)$ of $P(\lambda)$, where $x$ is recovered from $z$ in an appropriate way. When the matrix coefficients of $P(\lambda)$ have similar norms and $P(\lambda)$ is scaled so that the largest norm of its matrix coefficients is one, we conclude that $\mathcal{T}_P(\lambda)$ and $\mathcal{R}_P(\lambda)$ have good numerical properties in terms of eigenvalue conditioning and backward error. Moreover, we compare the numerical behavior of $\mathcal{T}_P(\lambda)$ with that of other well-studied linearizations in the literature, and conclude that $\mathcal{T}_P(\lambda)$ performs better than these linearizations when $P(\lambda)$ has odd degree and has been scaled.
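The normwise backward error of an approximate eigenpair that the abstract compares is a standard quantity for polynomial eigenvalue problems (in the style of Tisseur's definition): for $P(\lambda)=\sum_i \lambda^i A_i$, it is $\eta(x,\delta)=\|P(\delta)x\|_2 / \big((\sum_i |\delta|^i \|A_i\|_2)\,\|x\|_2\big)$. The sketch below is an illustrative implementation under that assumption, not code from the paper; the function names `polyeval` and `backward_error` are our own.

```python
import numpy as np

def polyeval(coeffs, lam):
    """Evaluate P(lam) = sum_i lam^i * A_i for a list of square
    coefficient matrices coeffs = [A_0, A_1, ..., A_k]."""
    P = np.zeros_like(coeffs[0], dtype=complex)
    for i, A in enumerate(coeffs):
        P += (lam ** i) * A
    return P

def backward_error(coeffs, x, lam):
    """Normwise backward error of the approximate eigenpair (x, lam),
    eta = ||P(lam) x|| / ((sum_i |lam|^i ||A_i||_2) ||x||),
    using spectral norms of the coefficients (an assumed, standard choice)."""
    num = np.linalg.norm(polyeval(coeffs, lam) @ x)
    den = sum(abs(lam) ** i * np.linalg.norm(A, 2)
              for i, A in enumerate(coeffs)) * np.linalg.norm(x)
    return num / den

# Example: P(lambda) = lambda^3 I - I (odd degree, as in the paper's setting)
# has the exact eigenpair (e_1, 1), so its backward error is zero.
coeffs = [-np.eye(2), np.zeros((2, 2)), np.zeros((2, 2)), np.eye(2)]
x = np.array([1.0, 0.0])
print(backward_error(coeffs, x, 1.0))        # exact pair: 0.0
print(backward_error(coeffs, x, 1.0 + 1e-8)) # perturbed eigenvalue: ~1.5e-8
```

With this quantity in hand, one can reproduce the kind of comparison the paper performs: compute $\eta(z,\delta)$ for an eigenpair of the linearization and $\eta(x,\delta)$ for the pair recovered for $P(\lambda)$, after scaling $P$ so the largest coefficient norm is one.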
