
Polar Decomposition Algorithms

Updated 26 November 2025
  • Polar Decomposition Algorithms are techniques to factor a matrix into a unitary (or isometric) matrix and a positive semidefinite matrix, crucial for optimal low-rank approximations.
  • They merge classical methods like SVD, Newton, and QDWH iterations with structure-preserving adaptations for Lie groups and manifold settings.
  • Advanced implementations extend to quantum algorithms and Riemannian optimization, enabling efficient tensor approximations and high-performance computations.

The polar decomposition expresses a matrix as the product of an isometry (or orthogonal/unitary) and a positive semidefinite matrix. Algorithmic frameworks for computing the polar decomposition span classical, Lie group, generalized, and quantum linear algebra settings, with applications in numerical analysis, optimization, tensor methods, and quantum information science. Recent research has yielded robust, backward-stable, and structure-preserving algorithms along with complete convergence analyses and extensions to group and manifold contexts, as well as to quantum circuit implementations.

1. Core Definitions and Classical Algorithmic Principles

Given $A \in \mathbb{K}^{m \times n}$ ($\mathbb{K}$ is $\mathbb{R}$ or $\mathbb{C}$), the standard polar decomposition is

$$A = U H,$$

where $U$ is a partial isometry ($U^* U = I_n$ when $A$ has full column rank; $U$ is unitary when $A$ is square and nonsingular), and $H = (A^* A)^{1/2}$ is positive semidefinite and self-adjoint. This decomposition is intimately related to the SVD $A = W \Sigma V^*$, from which $U = W V^*$ and $H = V \Sigma V^*$. The polar decomposition yields optimal low-rank approximations and is uniquely defined when $A$ has full column rank (Benner et al., 2021).
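As a concrete reference point, the SVD route above can be sketched in a few lines of NumPy (an illustrative sketch, not code from the cited papers):

```python
import numpy as np

def polar_via_svd(A):
    """Polar decomposition A = U H from the SVD A = W diag(s) V^*.

    U = W V^* is the (partial) isometric factor and H = V diag(s) V^*
    the positive semidefinite factor. Illustrative sketch only.
    """
    W, s, Vh = np.linalg.svd(A, full_matrices=False)
    U = W @ Vh                           # isometric factor (U^* U = I_n)
    H = Vh.conj().T @ (s[:, None] * Vh)  # V diag(s) V^*: PSD, self-adjoint
    return U, H
```

For full-column-rank $A$ this reproduces $A = UH$ with $U^*U = I_n$ exactly (up to rounding), at the cost of a full SVD.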

Generalizations include the $(M, N)$-polar decomposition, where $M, N$ are nonsingular "inner-product" matrices. The canonical generalized polar decomposition (GPD) $A = W S$ features a partial $(M, N)$-isometry $W$ and an $N$-self-adjoint positive $S$ (Benner et al., 2021).

2. Iterative and Direct Algorithms in Matrix and Group Contexts

Classical algorithms include SVD-based methods, Newton iterations, and Padé- or Zolotarev-rational-iteration schemes. Newton’s iteration is

$$U_{k+1} = \frac{1}{2}\left(U_k + U_k^{-T}\right),$$

converging quadratically to the orthogonal factor for invertible $A$ (Shen et al., 2022). The dynamically weighted Halley (DWH) iteration generalizes Padé-type matrix sign iterations, using adaptive rational maps for globally cubic convergence in the positive definite case (Benner et al., 2021).
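A minimal NumPy sketch of the Newton iteration (illustrative only; it omits the scaling strategies used in production codes to accelerate the early iterations):

```python
import numpy as np

def polar_newton(A, tol=1e-12, max_iter=100):
    """Newton iteration U_{k+1} = (U_k + U_k^{-T})/2 for real invertible A.

    Converges quadratically to the orthogonal polar factor.
    """
    U = A.astype(float).copy()
    for _ in range(max_iter):
        U_new = 0.5 * (U + np.linalg.inv(U).T)
        if np.linalg.norm(U_new - U, 'fro') <= tol * np.linalg.norm(U_new, 'fro'):
            U = U_new
            break
        U = U_new
    H = U.T @ A
    return U, 0.5 * (H + H.T)  # symmetrize to remove rounding noise
```

The explicit inverse here is exactly what the QR-based schemes below are designed to avoid.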

Modern algorithms replace explicit inverses by QR (QDWH), Cholesky, or, in the indefinite-inner-product case, hyperbolic QR factorizations. The QDWH method, which forms the basis for highly stable and parallelizable polar algorithms, is written as

$$X_{k+1} = \frac{b_k}{c_k} X_k + \frac{1}{\sqrt{c_k}}\left(a_k - \frac{b_k}{c_k}\right) Q_1 Q_2^*,$$

where $a_k, b_k, c_k$ are iteration-dependent scalars and the factors $Q_1, Q_2$ come from the QR factorization $\begin{bmatrix} \sqrt{c_k}\, X_k \\ I \end{bmatrix} = \begin{bmatrix} Q_1 \\ Q_2 \end{bmatrix} R$ (Benner et al., 2021).
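The QR-based update can be sketched as follows. This simplified version uses the fixed Halley weights $a = 3$, $b = 1$, $c = 3$ rather than the dynamically chosen $(a_k, b_k, c_k)$ of full QDWH, so it is an illustrative sketch, not a production implementation:

```python
import numpy as np

def polar_halley_qr(A, tol=1e-12, max_iter=100):
    """Inverse-free Halley iteration in the QR-based (QDWH-style) form.

    Fixed weights a=3, b=1, c=3 (plain Halley); full QDWH instead picks
    (a_k, b_k, c_k) adaptively from singular-value bounds.
    """
    m, n = A.shape
    X = A / np.linalg.norm(A, 2)       # scale singular values into (0, 1]
    a, b, c = 3.0, 1.0, 3.0
    I = np.eye(n)
    for _ in range(max_iter):
        # QR of the stacked matrix [sqrt(c) X; I] gives Q1 (top), Q2 (bottom)
        Q, _ = np.linalg.qr(np.vstack([np.sqrt(c) * X, I]))
        Q1, Q2 = Q[:m], Q[m:]
        X_new = (b / c) * X + (a - b / c) / np.sqrt(c) * Q1 @ Q2.conj().T
        if np.linalg.norm(X_new - X, 'fro') <= tol:
            X = X_new
            break
        X = X_new
    H = X.conj().T @ A
    return X, 0.5 * (H + H.conj().T)
```

No matrix is ever inverted: the identity $Q_1 Q_2^* = \sqrt{c}\, X (I + c\, X^* X)^{-1}$ is realized implicitly through the QR factorization, which is the source of the method's stability and parallelizability.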

For group settings, such as $SO(n)$ and indefinite orthogonal or symplectic groups, constructive algorithms based on $2 \times 2$ block reductions, quaternionic parametrizations, or double covers are available, avoiding large matrix eigenproblems (Adjei et al., 2018).

3. Manifold Optimization and Tensor Approximation

On product manifolds of Stiefel manifolds

$$\Omega = \mathrm{St}(n_1, r_1) \times \cdots \times \mathrm{St}(n_d, r_d),$$

polar decomposition enters as a building block for block-coordinate optimization. The Alternating Polar-Decomposition Iteration (APDOI) cyclically updates factor blocks by polar projection of the Euclidean gradient:

$$U_k^{(i)} = \mathrm{Polar}\left( \nabla_i f\left( \ldots, U_k^{(i-1)}, U_{k-1}^{(i)}, U_{k-1}^{(i+1)}, \ldots \right) \right).$$

The symmetric variant (PDOI) operates on a single Stiefel block for symmetric tensor approximation (Li et al., 2019).
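In the special case where every Stiefel block has a single column ($r_i = 1$), the polar projection of a block gradient reduces to vector normalization and APDOI coincides with the higher-order power method. A hedged NumPy sketch for the best rank-1 approximation of a 3-way tensor under that assumption:

```python
import numpy as np

def rank1_apdoi(T, iters=50, seed=0):
    """Alternating polar updates for best rank-1 approximation of a 3-tensor.

    With n_i x 1 blocks, Polar(g) = g / ||g||, so this is APDOI in its
    simplest (HOPM) form. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    u, v, w = (rng.standard_normal(n) for n in T.shape)
    u, v, w = u / np.linalg.norm(u), v / np.linalg.norm(v), w / np.linalg.norm(w)
    for _ in range(iters):
        # each block update: contract out the other factors, then polar-project
        g = np.einsum('ijk,j,k->i', T, v, w); u = g / np.linalg.norm(g)
        g = np.einsum('ijk,i,k->j', T, u, w); v = g / np.linalg.norm(g)
        g = np.einsum('ijk,i,j->k', T, u, v); w = g / np.linalg.norm(g)
    sigma = float(np.einsum('ijk,i,j,k->', T, u, v, w))
    return sigma, u, v, w
```

For higher multilinear ranks the normalization step is replaced by a genuine matrix polar decomposition of each block gradient, but the sweep structure is the same.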

These frameworks encompass and generalize LROAT (low-rank orthogonal approximation of tensors), HOPM (higher-order power method), HOOI (higher-order orthogonal iteration), S-HOPM, and S-LROAT. Convergence theory establishes weak, global, and (under Morse–Bott conditions) linear convergence via the Łojasiewicz gradient inequality

$$|f(x) - f(x^*)|^{1-\zeta} \leq \sigma \,\| \operatorname{grad} f(x) \|, \quad \zeta \in (0, 1/2]$$

(Li et al., 2019).

4. Riemannian and Geometric Optimization for Polar Decomposition

Computing the polar factor can be framed as an optimization problem on $O(n)$ or $SO(n)$, such as the orthogonal Procrustes problem

$$\min_{X \in O(n)} \|AX - B\|_F^2.$$

The Riemannian gradient-descent algorithm uses the exponential map for updates on the manifold,

$$X_{t+1} = X_t \exp(-\eta_t\, \Omega_{X_t}), \quad \Omega_{X_t} = \operatorname{skew}(X_t^T C^T),$$

and exhibits linear convergence when $C$ is nonsingular and algebraic convergence when $C$ is singular, due to the problem's weak-quasi-strong-convexity property (Alimisis et al., 18 Dec 2024).
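A hedged NumPy/SciPy sketch of this scheme on $SO(n)$; the step size, iteration count, and initialization at the identity are illustrative choices (not values from the cited paper), and `scipy.linalg.expm` supplies the exponential map:

```python
import numpy as np
from scipy.linalg import expm

def procrustes_rgd(A, B, eta=None, iters=3000):
    """Riemannian gradient descent on SO(n) for min_X ||A X - B||_F^2.

    The Riemannian gradient at X is X * skew(X^T G) with G = A^T (A X - B);
    each step retracts along the exponential map.
    """
    n = A.shape[1]
    if eta is None:
        eta = 0.25 / np.linalg.norm(A, 2) ** 2  # safe step: f is 2||A||^2-smooth
    X = np.eye(n)
    for _ in range(iters):
        G = A.T @ (A @ X - B)
        M = X.T @ G
        Omega = 0.5 * (M - M.T)                 # skew-symmetric part
        if np.linalg.norm(Omega) < 1e-14:
            break
        X = X @ expm(-eta * Omega)              # geodesic update on SO(n)
    return X
```

Since each factor $\exp(-\eta\,\Omega)$ is exactly orthogonal for skew $\Omega$, the iterates stay on the manifold up to rounding error.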

In Lie group variational integrators, instead of retraction via the exponential map (which mandates evaluation of higher-order derivatives of $\exp$), projection back to $SO(n)$ is performed using the polar decomposition, preserving both the group structure and symplecticity (Shen et al., 2022).

5. Quantum Algorithms for Polar Decomposition

Quantum algorithms approximate polar factors via block-encodings and the quantum singular value transformation (QSVT). In this framework, preparing a block-encoding of $A$ and applying a polynomial singular-value transform enables the implementation of

$$U_{\mathrm{polar}}(A) = W V^*$$

on quantum registers, with query complexity $O(\kappa \log(1/\varepsilon))$ for condition number $\kappa$ and target precision $\varepsilon$ (Quek et al., 2021).

An alternative exploits Hamiltonian simulation of $H = \begin{bmatrix} 0 & A^\dagger \\ A & 0 \end{bmatrix}$:

  • Phase estimation extracts eigencomponents,
  • Controlled-phase rotations enact the sign or absolute value transform,
  • Ancilla manipulation yields the unitary/isometry or positive factors (Lloyd et al., 2020).
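The mathematics behind this embedding can be checked classically: for square invertible $A$, the matrix sign of the Hermitian block matrix $H$ contains the unitary polar factor $U$ in its off-diagonal blocks. An illustrative NumPy sketch (the quantum algorithm realizes this transform with phase estimation and controlled rotations rather than a dense eigendecomposition):

```python
import numpy as np

def polar_unitary_via_sign(A):
    """Classical analogue of the Hamiltonian-embedding route.

    For square invertible A, sign(H) with H = [[0, A^dagger], [A, 0]]
    equals [[0, U^dagger], [U, 0]], where U is the unitary polar factor.
    """
    n = A.shape[0]
    Z = np.zeros((n, n), dtype=complex)
    H = np.block([[Z, A.conj().T], [A, Z]])
    evals, S = np.linalg.eigh(H)                 # H is Hermitian
    sign_H = (S * np.sign(evals)) @ S.conj().T   # S diag(sign(lambda)) S^dagger
    return sign_H[n:, :n]                        # lower-left block equals U
```

Applying $\operatorname{sign}$ to the spectrum is exactly the transform that the phase-estimation and controlled-rotation steps enact on the eigencomponents of $H$.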

These quantum routines exhibit exponential improvement in precision and polynomial speed-up in condition number over density-matrix-exponentiation-based methods and underpin quantum versions of the Procrustes problem and pretty good measurements (Quek et al., 2021).

6. Stability, Parallelization, and Decomposition Frameworks

Backward-stable algorithms for the polar decomposition are crucial for the CS (cosine–sine) decomposition and are achieved by leveraging parallelizable polar- and eigendecomposition subroutines, especially those using Zolotarev rational approximations to the sign function. In the CS case, two independent polar decompositions on submatrices $A_1$, $A_2$ are followed by an eigendecomposition of the Hermitian auxiliary matrix $B = H_2 - H_1 + \mu(I - A^* A)$ (Gawlik et al., 2018).

Generalized polar decomposition is stabilized further through methods such as CholeskyQR2 (a two-stage $LDL^T$ factorization), and even more so by employing permuted graph bases, achieving residuals of order $10^{-14}$ uniformly up to condition numbers of $10^{15}$ (Benner et al., 2021). QR-based versions circumvent the need for explicit matrix inversion, and hyperbolic QR extends the framework to indefinite inner products, essential for polar decompositions over groups preserving bilinear forms (Adjei et al., 2018).

| Algorithmic Approach | Complexity (per iter.) | Stability/Residuals |
|---|---|---|
| QDWH/QR-based (std.) | $O(n^3)$ | Good, $\sim 10^{-12}$ |
| CholeskyQR2 (LDL) | $(1/3)n^3$ | Robust to $\kappa \leq 10^{12}$ |
| Permuted graph bases | $O(n^3 \log n)$ | Excellent, $< 10^{-14}$ up to $\kappa = 10^{15}$ |

7. Specialized and Group-Intrinsic Polar Decomposition Algorithms

For real $4 \times 4$ groups preserving a signature matrix $M$, polar decomposition can be implemented using only $2 \times 2$ manipulations. For $M = I_{2,2}$ (neutral signature), the procedure consists of Cholesky factorization, $2 \times 2$ rotation, and block-matrix assembly; no eigendecomposition is required. The Lorentz group $SO^+(1,3)$ uses the $SL(2,\mathbb{C})$ double cover; the symmetric positive-definite factor is parametrized via Hermitian $2 \times 2$ matrices (Adjei et al., 2018). This approach yields explicit, constructive, component-wise algorithms for all the matrix groups of interest.


In summary, polar decomposition algorithms form a highly developed branch, spanning matrix analysis, Riemannian and Lie group optimization, high-order structure-preserving integrators, quantum algorithms, and applications across tensor and group settings. Recent advances have delivered convergence guarantees, backward stability, and algorithmic parallelizability, while group-specific and quantum schemes extend applicability to a breadth of algebraic and computational frameworks (Li et al., 2019, Alimisis et al., 18 Dec 2024, Quek et al., 2021, Adjei et al., 2018, Lloyd et al., 2020, Gawlik et al., 2018, Shen et al., 2022, Benner et al., 2021).
