Rational minimax approximation of matrix-valued functions (2508.06378v1)
Abstract: In this paper, we present a rigorous framework for rational minimax approximation of matrix-valued functions that generalizes classical scalar approximation theory. Given sampled data $\{(x_\ell, {F}(x_\ell))\}_{\ell=1}^{m}$, where ${F}:\mathbb{C} \to \mathbb{C}^{s \times t}$ is a matrix-valued function, we study the problem of finding a matrix-valued rational approximant ${R}(x) = {P}(x)/q(x)$ (with ${P}:\mathbb{C} \to \mathbb{C}^{s \times t}$ a matrix-valued polynomial and $q(x)$ a nonzero scalar polynomial of prescribed degrees) that minimizes the worst-case Frobenius-norm error over the given nodes: $$ \inf_{{R}(x) = {P}(x)/q(x)} \max_{1 \leq \ell \leq m} \|{F}(x_\ell) - {R}(x_\ell)\|_{\rm F}. $$ By reformulating this min-max optimization problem through Lagrangian duality, we derive a dual maximization problem over the probability simplex. We analyze weak and strong duality properties and establish a sufficient condition ensuring that the solution of the dual problem yields the minimax approximant ${R}(x)$. For numerical implementation, we propose an efficient method (\textsf{m-d-Lawson}) to solve the dual problem, generalizing Lawson's iteration to matrix-valued functions. Numerical experiments are conducted and compared against state-of-the-art approaches, demonstrating the efficiency of the proposed method as a novel computational framework for matrix-valued rational approximation.
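To make the setup concrete, the following is a minimal sketch (not the paper's \textsf{m-d-Lawson} algorithm itself) of a classical Lawson-type iteration adapted to the matrix-valued setting described above: at each step, a linearized weighted least-squares problem $\min \sum_\ell w_\ell \|q(x_\ell) F(x_\ell) - P(x_\ell)\|_{\rm F}^2$ is solved via an SVD, and the weights $w_\ell$ on the probability simplex are reweighted by the nodewise Frobenius-norm errors. All function and variable names here are illustrative assumptions, not the paper's notation or implementation.

```python
import numpy as np

def m_d_lawson_sketch(xs, Fs, deg_p, deg_q, iters=30):
    """Illustrative Lawson-type iteration for matrix-valued rational fitting.

    xs : (m,) sample nodes x_l
    Fs : (m, s, t) sampled matrix values F(x_l)
    Returns coefficients P (deg_p+1, s, t) and q (deg_q+1,), defining
    R(x) = P(x)/q(x) with P(x) = sum_k P[k] x**k and q(x) = sum_k q[k] x**k,
    plus the Frobenius-norm errors at the nodes from the last iteration.
    """
    m, s, t = Fs.shape
    st = s * t
    w = np.full(m, 1.0 / m)                        # dual variable on the simplex
    Vp = np.vander(xs, deg_p + 1, increasing=True) # Vandermonde basis for P
    Vq = np.vander(xs, deg_q + 1, increasing=True) # Vandermonde basis for q
    for _ in range(iters):
        # Weighted linearized residual q(x_l) F(x_l) - P(x_l), entrywise.
        blocks = []
        for l in range(m):
            blk_q = np.outer(Fs[l].ravel(), Vq[l])   # (st, deg_q+1)
            blk_p = -np.kron(Vp[l], np.eye(st))      # (st, (deg_p+1)*st)
            blocks.append(np.sqrt(w[l]) * np.hstack([blk_q, blk_p]))
        A = np.vstack(blocks)
        c = np.linalg.svd(A)[2][-1]        # right singular vector, smallest sigma
        q = c[:deg_q + 1]
        P = c[deg_q + 1:].reshape(deg_p + 1, s, t)
        # Frobenius-norm errors at the nodes drive the weight update.
        errs = np.array([
            np.linalg.norm(
                Fs[l] - np.tensordot(Vp[l], P, axes=(0, 0)) / (Vq[l] @ q), 'fro')
            for l in range(m)])
        if errs.max() < 1e-13:             # numerically exact fit; stop early
            break
        w = w * errs                       # classical Lawson reweighting
        w /= w.sum()                       # stay on the probability simplex
    return P, q, errs
```

For example, fitting $F(x) = \begin{pmatrix} 1/(x+2) & x \\ 0 & 1 \end{pmatrix}$ on a few nodes in $[-1,1]$ with `deg_p=2, deg_q=1` recovers the exact representation with common denominator $q(x) \propto x+2$, so the nodewise errors drop to machine precision. This sketch ignores the paper's duality analysis and safeguards (e.g. poles of $q$ near the nodes) and is only meant to convey the shape of the computation.
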