Polar Decomposition Algorithms
- Polar Decomposition Algorithms are techniques to factor a matrix into a unitary (or isometric) matrix and a positive semidefinite matrix, crucial for optimal low-rank approximations.
- They merge classical methods like SVD, Newton, and QDWH iterations with structure-preserving adaptations for Lie groups and manifold settings.
- Advanced implementations extend to quantum algorithms and Riemannian optimization, enabling efficient tensor approximations and high-performance computations.
The polar decomposition expresses a matrix as the product of an isometry (or orthogonal/unitary) and a positive semidefinite matrix. Algorithmic frameworks for computing the polar decomposition span classical, Lie group, generalized, and quantum linear algebra settings, with applications in numerical analysis, optimization, tensor methods, and quantum information science. Recent research has yielded robust, backward-stable, and structure-preserving algorithms along with complete convergence analyses and extensions to group and manifold contexts, as well as to quantum circuit implementations.
1. Core Definitions and Classical Algorithmic Principles
Given $A \in \mathbb{F}^{m \times n}$ ($\mathbb{F} = \mathbb{R}$ or $\mathbb{C}$, $m \ge n$), the standard polar decomposition is
$$A = UH,$$
where $U \in \mathbb{F}^{m \times n}$ is a partial isometry ($U^*U = I_n$; $U$ is unitary if $A$ is full rank), and $H = (A^*A)^{1/2}$ is positive semidefinite and self-adjoint. This decomposition is intimately related to the SVD ($A = W\Sigma V^*$), from which $U = WV^*$ and $H = V\Sigma V^*$. The polar decomposition yields optimal low-rank approximations and is uniquely defined when $A$ has full column rank (Benner et al., 2021).
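These SVD relations can be checked numerically. The following NumPy sketch (the function name `polar_via_svd` is ours, not from the cited works) computes $U = WV^*$ and $H = V\Sigma V^*$ and verifies the defining properties:

```python
import numpy as np

def polar_via_svd(A):
    """Polar decomposition A = U H from the SVD A = W @ diag(s) @ Vh."""
    W, s, Vh = np.linalg.svd(A, full_matrices=False)
    U = W @ Vh                      # partial isometry (unitary for square full-rank A)
    H = (Vh.conj().T * s) @ Vh      # H = V diag(s) V^*, positive semidefinite
    return U, H

A = np.array([[2.0, 1.0], [0.0, 3.0]])
U, H = polar_via_svd(A)
assert np.allclose(U @ H, A)                      # A = U H
assert np.allclose(U.conj().T @ U, np.eye(2))     # U^* U = I
assert np.allclose(H, H.conj().T)                 # H self-adjoint
```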
Generalizations include the $(M,N)$-polar decomposition, where $M$ and $N$ are nonsingular "inner-product" matrices defining scalar products on the codomain and domain. The canonical generalized polar decomposition (GPD) $A = WS$ features a partial $(M,N)$-isometry $W$ and an $N$-self-adjoint positive semidefinite factor $S$ (Benner et al., 2021).
2. Iterative and Direct Algorithms in Matrix and Group Contexts
Classical algorithms include SVD-based methods, Newton iterations, and Padé- or Zolotarev-rational-iteration schemes. Newton's iteration is
$$X_{k+1} = \tfrac{1}{2}\left(X_k + X_k^{-*}\right), \qquad X_0 = A,$$
converging quadratically to the orthogonal factor for invertible $A$ (Shen et al., 2022). The dynamically weighted Halley (DWH) iteration generalizes Padé-type matrix sign iterations, using adaptive rational maps for globally cubic convergence in the positive definite case (Benner et al., 2021).
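A minimal NumPy sketch of the Newton iteration, assuming an invertible input (the stopping rule is illustrative):

```python
import numpy as np

def polar_newton(A, tol=1e-12, max_iter=50):
    """Newton iteration X_{k+1} = (X_k + X_k^{-*}) / 2, X_0 = A,
    converging quadratically to the orthogonal polar factor of invertible A."""
    X = np.array(A, dtype=complex)
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.inv(X).conj().T)
        done = np.linalg.norm(X_new - X) < tol * np.linalg.norm(X)
        X = X_new
        if done:
            break
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
U = polar_newton(A)
H = U.conj().T @ A                               # Hermitian factor H = U^* A
assert np.allclose(U.conj().T @ U, np.eye(2))    # U orthogonal/unitary
assert np.allclose(U @ H, A)                     # A = U H
```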
Modern algorithms replace explicit inverses by QR (QDWH), Cholesky, or, in the indefinite-inner-product case, hyperbolic QR factorizations. The QDWH method, which forms the basis for highly stable and parallelizable polar algorithms, is written as
$$X_{k+1} = \frac{b_k}{c_k} X_k + \frac{1}{\sqrt{c_k}}\left(a_k - \frac{b_k}{c_k}\right) Q_1 Q_2^*, \qquad \begin{bmatrix} \sqrt{c_k}\, X_k \\ I \end{bmatrix} = \begin{bmatrix} Q_1 \\ Q_2 \end{bmatrix} R,$$
where $a_k, b_k, c_k$ are iteration-specific scalars and the QR decomposition is applied to the stacked matrix on the left (Benner et al., 2021).
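The QDWH update can be sketched as follows, assuming the standard dynamically weighted Halley formulas for the scalars $a_k, b_k, c_k$ in terms of a lower bound $\ell_k$ on the smallest singular value; here $\ell_0$ is computed exactly for clarity, whereas production codes use a cheap estimate:

```python
import numpy as np

def qdwh(A, max_iter=20, tol=1e-12):
    """QDWH: inverse-free, QR-based dynamically weighted Halley iteration
    converging to the orthogonal polar factor of A."""
    m, n = A.shape
    X = A / np.linalg.norm(A, 2)                  # scale singular values into (0, 1]
    l = np.linalg.svd(X, compute_uv=False)[-1]    # lower bound on sigma_min (exact here)
    for _ in range(max_iter):
        # dynamically weighted Halley coefficients a_k, b_k, c_k
        d = (4.0 * (1.0 - l**2) / l**4) ** (1.0 / 3.0)
        a = np.sqrt(1.0 + d) + 0.5 * np.sqrt(
            8.0 - 4.0 * d + 8.0 * (2.0 - l**2) / (l**2 * np.sqrt(1.0 + d)))
        b = (a - 1.0) ** 2 / 4.0
        c = a + b - 1.0
        # QR of the stacked matrix [sqrt(c) X; I] replaces the explicit inverse
        Q, _ = np.linalg.qr(np.vstack([np.sqrt(c) * X, np.eye(n)]))
        Q1, Q2 = Q[:m, :], Q[m:, :]
        X_new = (b / c) * X + (a - b / c) / np.sqrt(c) * (Q1 @ Q2.conj().T)
        l = l * (a + b * l**2) / (1.0 + c * l**2)  # update the lower bound
        done = np.linalg.norm(X_new - X) < tol
        X = X_new
        if done:
            break
    return X
```

The iteration typically converges in about six steps regardless of conditioning, which is what makes QDWH attractive for parallel implementations.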
For group settings, such as indefinite orthogonal or symplectic groups, constructive algorithms based on block reductions, quaternionic parametrizations, or double covers are available, avoiding large matrix eigenproblems (Adjei et al., 2018).
3. Manifold Optimization and Tensor Approximation
On products of Stiefel manifolds
$$\mathrm{St}(n_1, r_1) \times \cdots \times \mathrm{St}(n_d, r_d), \qquad \mathrm{St}(n, r) = \{X \in \mathbb{R}^{n \times r} : X^\top X = I_r\},$$
polar decomposition enters as a building block for block-coordinate optimization. The Alternating Polar-Decomposition Iteration (APDOI) cyclically updates each factor block by replacing it with the orthogonal polar factor of the corresponding block of the Euclidean gradient, i.e., by polar projection onto the Stiefel manifold. The symmetric variant (PDOI) operates on a single Stiefel block for symmetric tensor approximation (Li et al., 2019).
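The polar-projection step at the heart of such block updates can be illustrated in isolation. In this sketch (ours, with a random matrix standing in for one gradient block), the polar factor is the closest Stiefel point and maximizes the trace inner product with the gradient:

```python
import numpy as np

def polar_project(G):
    """Polar projection onto the Stiefel manifold St(n, r): the orthogonal
    polar factor of G, which maximizes <X, G> = tr(X^T G) over X^T X = I."""
    W, _, Vh = np.linalg.svd(G, full_matrices=False)
    return W @ Vh

rng = np.random.default_rng(0)
G = rng.standard_normal((6, 2))            # stands in for one Euclidean-gradient block
X_star = polar_project(G)
assert np.allclose(X_star.T @ X_star, np.eye(2))   # lies on St(6, 2)
# the polar factor attains a larger inner product than random Stiefel points
for _ in range(100):
    Q, _ = np.linalg.qr(rng.standard_normal((6, 2)))
    assert np.trace(Q.T @ G) <= np.trace(X_star.T @ G) + 1e-12
```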
These frameworks encompass and generalize LROAT (low-rank orthogonal approximation of tensors), HOPM (higher-order power method), HOOI (higher-order orthogonal iteration), S-HOPM, and S-LROAT. Convergence theory establishes weak, global, and, under Morse–Bott conditions, linear convergence via the Łojasiewicz gradient inequality
$$|f(x) - f(x^*)|^{1-\theta} \le C\,\|\nabla f(x)\| \quad \text{near a critical point } x^*, \ \theta \in (0, \tfrac{1}{2}]$$
(Li et al., 2019).
4. Riemannian and Geometric Optimization for Polar Decomposition
Computing the polar factor can be framed as an optimization problem on $O(n)$ or $SO(n)$, such as the orthogonal Procrustes problem
$$\min_{Q \in O(n)} \|A - Q\|_F.$$
The Riemannian gradient-descent algorithm uses the exponential map for updates on the manifold,
$$Q_{k+1} = \mathrm{Exp}_{Q_k}\!\left(-\eta\, \mathrm{grad} f(Q_k)\right),$$
and exhibits linear convergence when $A$ is nonsingular and algebraic convergence when $A$ is singular, due to the problem's weak-quasi-strong-convexity property (Alimisis et al., 18 Dec 2024).
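A toy implementation of this Riemannian gradient descent (our sketch; the step size and iteration count are illustrative, and `expm_skew` is a small helper for the geodesic step, not part of the cited algorithm):

```python
import numpy as np

def expm_skew(w):
    """Matrix exponential of a real skew-symmetric w via eigh of the Hermitian i*w."""
    lam, V = np.linalg.eigh(1j * w)
    return (V @ np.diag(np.exp(-1j * lam)) @ V.conj().T).real

def procrustes_rgd(A, eta=0.1, iters=500):
    """Riemannian gradient descent for min_{Q in O(n)} ||A - Q||_F,
    equivalently max_Q tr(Q^T A); each step moves along the geodesic
    Exp_Q(Q w) = Q expm(w) with skew-symmetric w."""
    n = A.shape[0]
    Q = np.eye(n)
    for _ in range(iters):
        G = -A                                # Euclidean gradient of f(Q) = -tr(Q^T A)
        w = 0.5 * (Q.T @ G - G.T @ Q)         # skew part: Riemannian gradient direction
        Q = Q @ expm_skew(-eta * w)           # geodesic descent step
    return Q

A = np.array([[3.0, 1.0], [0.0, 2.0]])
Q = procrustes_rgd(A)
W, _, Vh = np.linalg.svd(A)                   # the minimizer is the polar factor W V^T
assert np.allclose(Q, W @ Vh, atol=1e-6)
```

The fixed-point condition $w = 0$ holds exactly when $Q^\top A$ is symmetric, which is the defining property of the polar factor.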
In Lie group variational integrators, instead of retraction via the exponential map (which mandates evaluation of higher-order derivatives of the matrix exponential), projection back onto the group is performed using the polar decomposition, preserving both the group structure and symplecticity (Shen et al., 2022).
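The projection step can be sketched as a single schematic update (our illustration, not the full variational integrator from the cited work): a tangent move leaves the group, and the polar factor pulls the result back:

```python
import numpy as np

def polar_retract(Y):
    """Project a near-orthogonal matrix back onto the orthogonal group:
    the polar factor is the closest orthogonal matrix in the Frobenius norm."""
    W, _, Vh = np.linalg.svd(Y)
    return W @ Vh

# one schematic integrator step on SO(3): move along a tangent direction
# X @ Omega (Omega skew-symmetric), which leaves the group, then re-project
X = np.eye(3)
Omega = np.array([[ 0.0, -0.3,  0.1],
                  [ 0.3,  0.0, -0.2],
                  [-0.1,  0.2,  0.0]])
X_next = polar_retract(X + X @ Omega)
assert np.allclose(X_next.T @ X_next, np.eye(3))   # back on the group
assert np.isclose(np.linalg.det(X_next), 1.0)      # stays in SO(3)
```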
5. Quantum Algorithms for Polar Decomposition
Quantum algorithms approximate polar factors via block-encodings and the quantum singular value transformation (QSVT). In this framework, preparing a block-encoding of $A$ and applying a polynomial singular-value transform (approximating the sign function) enables the implementation of the unitary polar factor
$$U = A (A^* A)^{-1/2}$$
on quantum registers, with query complexity scaling polynomially in the condition number $\kappa$ and logarithmically in the inverse target precision $1/\epsilon$ (Quek et al., 2021).
An alternative exploits Hamiltonian simulation of the Hermitian dilation $\begin{bmatrix} 0 & A \\ A^* & 0 \end{bmatrix}$:
- Phase estimation extracts eigencomponents,
- Controlled-phase rotations enact the sign or absolute value transform,
- Ancilla manipulation yields the unitary/isometry or positive factors (Lloyd et al., 2020).
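The linear algebra underlying this route can be emulated classically. The sketch below (assuming a square invertible input) is a plain NumPy analogue of these steps, not the quantum algorithm itself: it eigendecomposes the Hermitian dilation, applies the sign transform to the eigenvalues, and reads off the unitary factor:

```python
import numpy as np

def polar_via_hermitian_dilation(A):
    """Classical emulation: embed A in the Hermitian dilation
    Hd = [[0, A], [A^*, 0]], apply the sign function to its eigenvalues,
    and read U = A (A^*A)^{-1/2} off the top-right block of sign(Hd)."""
    m, n = A.shape
    Hd = np.block([[np.zeros((m, m)), A],
                   [A.conj().T, np.zeros((n, n))]])
    evals, V = np.linalg.eigh(Hd)
    S = (V * np.sign(evals)) @ V.conj().T   # sign(Hd), columns scaled by +/-1
    return S[:m, m:]                        # off-diagonal block is the unitary factor

A = np.array([[1.0, 1.0], [0.0, 2.0]])
U = polar_via_hermitian_dilation(A)
assert np.allclose(U.conj().T @ U, np.eye(2))   # unitary factor
H = U.conj().T @ A                              # Hermitian positive factor
assert np.allclose(H, H.conj().T)
```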
These quantum routines exhibit exponential improvement in precision and polynomial speed-up in condition number over density-matrix-exponentiation-based methods and underpin quantum versions of the Procrustes problem and pretty good measurements (Quek et al., 2021).
6. Stability, Parallelization, and Decomposition Frameworks
Backward-stable algorithms for polar decomposition are crucial for the CS decomposition (cosine-sine decomposition) and are achieved by leveraging parallelizable polar and eigendecomposition subroutines, especially those using Zolotarev rational approximations to the sign function. In the CS case, two independent polar decompositions of the blocks of a partitioned column of the unitary matrix are followed by an eigendecomposition of a Hermitian auxiliary matrix (Gawlik et al., 2018).
Generalized polar decomposition is stabilized further through methods such as CholeskyQR2 (two-stage Cholesky-based orthogonalization), and even more so by employing permuted graph bases, achieving small residuals uniformly even at large condition numbers (Benner et al., 2021). QR-based versions circumvent the need for explicit matrix inversion, and hyperbolic QR extends the framework to indefinite inner products, essential for polar decompositions over groups preserving bilinear forms (Adjei et al., 2018).
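A plain-NumPy sketch of the CholeskyQR2 idea (illustrative only; real implementations use triangular solves instead of explicit inverses and guard against breakdown of the Cholesky step for ill-conditioned inputs). The second pass repairs the orthogonality loss of the first:

```python
import numpy as np

def cholesky_qr2(A):
    """CholeskyQR2: two passes of Cholesky-based QR; the second pass
    restores the orthogonality lost by the first."""
    def cholesky_qr(X):
        R = np.linalg.cholesky(X.T @ X).T   # X^T X = R^T R, R upper triangular
        Q = X @ np.linalg.inv(R)            # (triangular solve in practice)
        return Q, R
    Q1, R1 = cholesky_qr(A)
    Q, R2 = cholesky_qr(Q1)                 # re-orthogonalization pass
    return Q, R2 @ R1                       # A = Q (R2 R1)

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
Q, R = cholesky_qr2(A)
assert np.allclose(Q.T @ Q, np.eye(5))     # orthonormal to full precision
assert np.allclose(Q @ R, A)
```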
<table> <thead> <tr> <th>Algorithmic Approach</th> <th>Complexity (per iter., dense)</th> <th>Stability/Residuals</th> </tr> </thead> <tbody> <tr> <td>QDWH/QR-based (Std.)</td> <td>O(n³)</td> <td>Good; backward stable</td> </tr> <tr> <td>CholeskyQR2 (LDL)</td> <td>O(n³)</td> <td>Robust for moderately ill-conditioned inputs</td> </tr> <tr> <td>Permuted Graph Bases</td> <td>O(n³)</td> <td>Excellent; small residuals even at large condition numbers</td> </tr> </tbody> </table>
7. Specialized and Group-Intrinsic Polar Decomposition Algorithms
For real groups preserving a signature matrix $J$, polar decomposition can be implemented using only elementary matrix manipulations. In the neutral-signature case, the procedure consists of Cholesky factorization, rotation, and block matrix assembly; no eigendecomposition is required. The Lorentz group case works through its double cover, and the symmetric positive-definite factor is parametrized via Hermitian matrices (Adjei et al., 2018). This approach yields explicit, constructive, component-wise algorithms for all matrix groups of interest.
In summary, polar decomposition algorithms form a highly developed branch, spanning matrix analysis, Riemannian and Lie group optimization, high-order structure-preserving integrators, quantum algorithms, and applications across tensor and group settings. Recent advances have delivered convergence guarantees, backward stability, and algorithmic parallelizability, while group-specific and quantum schemes extend applicability to a breadth of algebraic and computational frameworks (Li et al., 2019, Alimisis et al., 18 Dec 2024, Quek et al., 2021, Adjei et al., 2018, Lloyd et al., 2020, Gawlik et al., 2018, Shen et al., 2022, Benner et al., 2021).