
Scaled Fixed Point Algorithm for Computing the Matrix Square Root (2002.08471v1)

Published 18 Feb 2020 in math.NA and cs.NA

Abstract: This paper addresses the numerical solution of the matrix square root problem. Two fixed point iterations are proposed by rearranging the nonlinear matrix equation $A - X^2 = 0$ and incorporating a positive scaling parameter. The proposals only need to compute one matrix inverse and at most two matrix multiplications per iteration. A global convergence result is established. The numerical comparisons versus some existing methods from the literature, on several test problems, demonstrate the efficiency and effectiveness of our proposals.
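One concrete way such a rearrangement can yield a fixed-point scheme (an illustrative sketch only; the paper's exact iterations and scaling choices may differ) is to add $\alpha X$ to both sides of $A - X^2 = 0$ and factor:

```latex
A - X^2 = 0
\;\Longleftrightarrow\;
X\,(X + \alpha I) = A + \alpha X
\quad\Longrightarrow\quad
X_{k+1} = \bigl(A + \alpha X_k\bigr)\bigl(X_k + \alpha I\bigr)^{-1},
\qquad \alpha > 0.
```

Each step of this particular scheme costs one matrix inverse and one matrix multiplication, which is within the operation count stated in the abstract.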

Authors (3)
  1. Harry F. Oviedo (2 papers)
  2. Hugo J. Lara (1 paper)
  3. Oscar S. Dalmau (1 paper)
Citations (1)

Summary

  • The paper introduces two novel fixed-point iterations using a scaling parameter to efficiently compute the matrix square root.
  • It achieves global convergence under certain conditions, ensuring robust performance in solving the nonlinear matrix equation.
  • Numerical comparisons demonstrate that the methods require minimal costly operations and outperform traditional approaches in efficiency.

The paper "Scaled Fixed Point Algorithm for Computing the Matrix Square Root" addresses an important problem in numerical linear algebra: finding efficient and reliable methods for computing the matrix square root. Specifically, this work introduces two novel fixed-point iterations to solve the nonlinear matrix equation $A - X^2 = 0$.

Key Contributions:

  1. Fixed Point Iterations:
    • The authors propose two iterative algorithms derived from rearranging the matrix equation and incorporating a positive scaling parameter. This approach simplifies the computation by requiring only one matrix inverse and a maximum of two matrix multiplications per iteration.
  2. Algorithm Efficiency:
    • The proposed methods are computationally efficient, avoiding much of the costly linear algebra typically required to compute matrix square roots. Needing only one matrix inverse and at most two matrix multiplications per iteration is highlighted as a significant advantage over traditional methods.
  3. Global Convergence:
    • Theoretical convergence results are established in the paper: the iterations converge globally under certain conditions, which makes the methods robust.
  4. Numerical Comparisons:
    • The paper conducts extensive numerical experiments comparing these new algorithms against existing methods in the literature. The results demonstrate that the proposed methods are not only efficient but also effective across various test problems, showcasing improved performance metrics.
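The fixed-point idea in item 1 can be sketched in a few lines of NumPy. The update below, $X_{k+1} = (A + \alpha X_k)(X_k + \alpha I)^{-1}$ with $\alpha > 0$, is one illustrative rearrangement of $A - X^2 = 0$ consistent with the abstract's stated operation count; it is not necessarily the authors' exact algorithm, and the function name, stopping rule, and default parameters here are invented for this sketch.

```python
import numpy as np

def scaled_fixed_point_sqrt(A, alpha=1.0, tol=1e-10, max_iter=200):
    """Illustrative scaled fixed-point iteration for X^2 = A.

    Rearranging A - X^2 = 0 as X(X + alpha*I) = A + alpha*X suggests
    the update X_{k+1} = (A + alpha*X_k)(X_k + alpha*I)^{-1}, costing
    one matrix inverse (realized here as a linear solve) and one
    multiplication per step. This is a sketch, not the paper's scheme.
    """
    n = A.shape[0]
    I = np.eye(n)
    X = I.copy()  # identity starting guess
    for _ in range(max_iter):
        # Solve X_new (X + alpha*I) = A + alpha*X for X_new,
        # i.e. a transposed linear solve instead of an explicit inverse.
        X_new = np.linalg.solve((X + alpha * I).T, (A + alpha * X).T).T
        if np.linalg.norm(X_new - X, "fro") < tol:
            return X_new
        X = X_new
    return X

# Small symmetric positive definite example.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
X = scaled_fixed_point_sqrt(A)
print(np.allclose(X @ X, A, atol=1e-8))
```

For a symmetric positive definite $A$ and the identity starting guess, every iterate is a polynomial in $A$, so each eigenvalue follows the scalar recurrence $x_{k+1} = (a + \alpha x_k)/(x_k + \alpha)$, whose fixed point is $\sqrt{a}$.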

Significance and Impact:

This paper contributes significantly to a classical problem in computational mathematics: computing matrix square roots efficiently. By reducing the computational burden typical of such operations and ensuring global convergence, the proposed methods offer a valuable tool for control theory, statistics, and other areas where matrix computations are prevalent.

Overall, the work is a notable advancement in the field, providing practical and theoretically sound methods that can be applied in numerous real-world scenarios requiring matrix square root calculations.