On the rational approximation of Markov functions, with applications to the computation of Markov functions of Toeplitz matrices
Published 9 Jun 2021 in math.NA (arXiv:2106.05098v2)
Abstract: We investigate the problem of approximating the matrix function $f(A)$ by $r(A)$, with $f$ a Markov function, $r$ a rational interpolant of $f$, and $A$ a symmetric Toeplitz matrix. In a first step, we obtain a new upper bound for the relative interpolation error $1-r/f$ on the spectral interval of $A$. By minimizing this upper bound over all interpolation points, we obtain a new, simple and sharp a priori bound for the relative interpolation error. We then consider three different approaches for representing and computing the rational interpolant $r$. Theoretical and numerical evidence is given that any of these methods for a scalar argument allows one to achieve high precision, even in the presence of finite precision arithmetic. We finally investigate the problem of efficiently evaluating $r(A)$, where it turns out that the relative error for a matrix argument is only small if we use a partial fraction decomposition for $r$ following Antoulas and Mayo. An important role is played by a new stopping criterion which ensures that the degree of $r$ leading to a small error is found automatically, even in the presence of finite precision arithmetic.
The paper introduces robust error bounds for rational interpolants of Markov functions using orthogonal polynomial techniques.
It derives optimal interpolation points through elliptic and Zolotarev numbers to minimize approximation errors effectively.
It demonstrates improved numerical stability and computational efficiency in evaluating matrix functions of Toeplitz matrices.
Rational Approximation of Markov Functions
The paper investigates the rational approximation of Markov functions, specifically focusing on applications to computing Markov functions of symmetric Toeplitz matrices. Markov functions, expressible as the Cauchy transform of a positive measure supported on a real interval, are essential in various computational fields such as network analysis and signal processing. This work examines the precision and stability of rational interpolants for these functions, with implications for efficient matrix function computation in large-scale numerical contexts.
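To make the Cauchy-transform structure concrete, the sketch below checks numerically that $f(z)=\log(z)/(z-1)$ (the Markov function used in the paper's Toeplitz experiments) agrees with an integral representation $f(z)=\int_0^1 ds/(1+s(z-1))$, evaluated by Gauss-Legendre quadrature. The function names and quadrature degree are illustrative choices, not taken from the paper.

```python
import numpy as np

def markov_f(z):
    """f(z) = log(z)/(z-1), a Markov function (continued by f(1) = 1)."""
    return np.log(z) / (z - 1.0)

def markov_f_quad(z, n=64):
    """Evaluate f via the integral representation
        f(z) = ∫_0^1 ds / (1 + s(z-1)),
    using n-point Gauss-Legendre quadrature mapped to [0, 1].
    (An illustrative check, not the paper's algorithm.)"""
    x, w = np.polynomial.legendre.leggauss(n)
    s = 0.5 * (x + 1.0)   # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * w           # rescale weights accordingly
    return np.sum(w / (1.0 + s * (z - 1.0)))

for z in (0.5, 2.0, 10.0):
    print(z, markov_f(z), markov_f_quad(z))
```

The smooth integrand makes the quadrature converge rapidly, so both evaluations agree to machine precision for arguments away from the support of the measure.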
Upper Bounds on Approximation Error
The theoretical development presented in the paper centers around deriving robust upper bounds for the relative approximation error in rational interpolants of Markov functions. This is achieved by leveraging properties of polynomials orthogonal with respect to measures derived from the function's integral representation. The authors introduce a refined error analysis that yields explicit, optimizable bounds. These bounds quantify the interpolation error on real intervals and the unit disk, establishing a systematic framework for selecting interpolation points that minimize the approximation error.
Optimization of Interpolation Points
A significant contribution of this paper is its derivation of optimal points for rational interpolation. The authors propose a strategy using elliptic functions and Zolotarev numbers to pinpoint interpolation points that minimize the approximation error on specified intervals or complex domains. This enables rational approximants to be used in numerical algorithms with heightened precision, which is particularly relevant for large-scale Toeplitz matrix computations.
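As a rough illustration of what elliptic-function node families look like, the sketch below computes the classical ADI-type elliptic points on an interval $[c,d]$ with $0 < c < d$, built from the Jacobi function $\mathrm{dn}$ and the complete elliptic integral $K$. These are of the same elliptic type as the optimized points in the paper, but the exact formula here is a standard textbook choice, not necessarily the paper's optimizer.

```python
import numpy as np
from scipy.special import ellipk, ellipj

def elliptic_nodes(c, d, n):
    """Sketch of elliptic (Zolotarev-type) nodes on [c, d], 0 < c < d:
        x_j = d * dn((2j-1) K(m) / (2n), m),   m = 1 - (c/d)**2.
    Since dn decreases from 1 to sqrt(1-m) = c/d on [0, K], all nodes
    lie in [c, d] and accumulate toward the ill-conditioned endpoint c."""
    m = 1.0 - (c / d) ** 2              # scipy uses the parameter m = k^2
    K = ellipk(m)                       # complete elliptic integral of 1st kind
    u = (2 * np.arange(1, n + 1) - 1) * K / (2 * n)
    _, _, dn, _ = ellipj(u, m)          # Jacobi elliptic function dn(u, m)
    return d * dn

nodes = elliptic_nodes(1e-3, 1.0, 8)
print(nodes)   # decreasing sequence inside [c, d]
```

For badly conditioned intervals ($c \ll d$) the nodes cluster strongly near $c$, which is precisely the regime where naive (e.g., equispaced) interpolation points perform poorly.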
Figure 1: Relative L∞ error on the interval [c,d] of rational interpolants of type [m−1∣m] of the Markov function f(z)=1/z.
Numerical Stability and Implementation Techniques
The study assesses three representations of the rational interpolant: partial fraction, barycentric, and Thiele continued fraction forms, testing each for computational feasibility and stability under finite-precision arithmetic. The paper emphasizes the backward stability of evaluating Thiele interpolating continued fractions for scalar arguments, showing that this representation is robust against precision loss and numerical instability.
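The Thiele form mentioned above can be sketched with the textbook inverse-difference construction and bottom-up evaluation; this is a generic recipe for illustration, not the paper's stabilized implementation, and the test function used below is an arbitrary choice.

```python
import numpy as np

def thiele_coeffs(x, f):
    """Inverse differences a_k for the Thiele continued fraction
        r(z) = a_0 + (z - x_0)/(a_1 + (z - x_1)/(a_2 + ...)),
    which interpolates f at the nodes x (textbook recipe; may break
    down for degenerate data where an inverse difference vanishes)."""
    a = np.array(f, dtype=float)
    for k in range(1, len(x)):
        a[k:] = (x[k:] - x[k - 1]) / (a[k:] - a[k - 1])
    return a

def thiele_eval(x, a, z):
    """Evaluate the continued fraction from the bottom up."""
    v = a[-1]
    for k in range(len(a) - 2, -1, -1):
        v = a[k] + (z - x[k]) / v
    return v

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
a = thiele_coeffs(x, np.exp(x))
print(thiele_eval(x, a, 0.75), np.exp(0.75))
```

With five nodes the continued fraction is a low-degree rational interpolant, so it reproduces the data exactly at the nodes and approximates between them.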
Application to Toeplitz Matrices
Implementing rational approximation to compute matrix functions in the algebra of Toeplitz matrices yields substantial gains in computational cost and accuracy. By exploiting the small displacement rank of Toeplitz-like matrices, the approach evaluates matrix functions with complexity scaling favorably in the matrix dimension, which proves crucial for challenging cases such as large matrices with high condition numbers.
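The matrix evaluation the abstract singles out, i.e. applying $r(A)$ in partial-fraction form via one shifted linear solve per pole, can be sketched as follows. Dense solves are used here for clarity; for a Toeplitz $A$ each shifted system would instead be handled by a fast structured solver, and the coefficients $\omega_j$, poles $\xi_j$, and function names are illustrative placeholders.

```python
import numpy as np

def ratmat_apply(omega0, omega, xi, A, b):
    """Apply r(A) to a vector b, with r in partial-fraction form
        r(z) = omega0 + sum_j omega[j] / (z - xi[j]),
    using one shifted solve per pole:
        r(A) b = omega0*b + sum_j omega[j] * (A - xi[j]*I)^{-1} b.
    Complex accumulator, since poles/residues may be complex."""
    n = A.shape[0]
    y = omega0 * b.astype(complex)
    for wj, xj in zip(omega, xi):
        y += wj * np.linalg.solve(A - xj * np.eye(n), b)
    return y

# Sanity check on a diagonal matrix, where r(A) acts entrywise as r(a_ii).
A = np.diag([1.0, 2.0, 3.0])
print(ratmat_apply(2.0, [3.0, -1.0], [-1.0, -2.0], A, np.ones(3)))
```

Since the poles of a rational interpolant of a Markov function lie on the support of the measure, away from the spectral interval of a symmetric positive definite $A$, each shifted system stays well defined.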
Figure 2: Relative errors for approaching f(A) for the Markov function f(z)=log(z)/(z−1), with a symmetric positive definite Toeplitz matrix.
Conclusion
The paper establishes a comprehensive understanding of the intricacies involved in rational approximation of Markov functions and their application to computationally efficient matrix function evaluations. The derived error bounds, optimized interpolation strategies, and detailed implementation guide offer a solid foundation for advancing numerical methods in large-scale matrix computations. Future research may explore further adaptations in non-symmetric Toeplitz matrices or extend these findings to other classes of structured matrices, broadening the scope of practical applicability in numerical analysis.
The theoretical results and practical algorithms from this work enable enhanced precision and reduced computational load in applications spanning scientific computing and data processing. Further study of the interplay between theoretical accuracy and practical efficiency promises continued advances in computing with structured matrices.