Dual Riemannian ADMM for Low-Rank SDPs
- The paper introduces ManiDSDP, a dual ADMM method that reformulates low-rank SDP problems using a Riemannian optimization framework on the oblique manifold.
- It combines the global convergence of dual ADMM with efficient Riemannian trust-region subproblems, using Burer–Monteiro factorization to handle unit diagonal constraints.
- Numerical experiments show ManiDSDP achieves high accuracy with significantly fewer iterations and improved scalability compared to traditional SDP solvers.
The dual Riemannian alternating direction method of multipliers (ADMM), as developed in "A Dual Riemannian ADMM Algorithm for Low-Rank SDPs with Unit Diagonal" (Wang et al., 4 Dec 2025), is a specialized first-order optimization algorithm for low-rank semidefinite programs (SDPs) under unit diagonal constraints. By recasting the core ADMM subproblem as Riemannian optimization over the oblique manifold via Burer–Monteiro factorization, this method—denoted ManiDSDP—combines the global convergence guarantees of dual ADMM with the empirical scalability and memory efficiency of nonlinear manifold-based search. The algorithm achieves state-of-the-art practical performance versus established SDP solvers on relaxations of both dense and sparse binary quadratic programs.
1. Problem Formulation
The target class comprises SDPs with unit diagonal and low-rank structure, typically posed in the dual form (DSDP): minimize $b^\top y$ subject to the affine constraint $\mathcal{A}^*(y) + S = C$ with slack $S \succeq 0$ carrying the unit diagonal, $\operatorname{diag}(S) = e$. Here $\mathcal{A}$ is a linear map with adjoint $\mathcal{A}^*$, and $b$, $C$ are problem data. Strong duality between (DSDP) and its primal (PSDP) is assumed, and the method works with an equivalent reformulation of the dual that exposes the manifold structure.
This dual formulation admits tight, tractable relaxations of combinatorial problems, such as second-order relaxations for binary quadratic programs (BQPs).
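A canonical member of this class is the max-cut relaxation; its primal-dual pair (in illustrative notation, not necessarily the paper's) reads:

```latex
\begin{aligned}
\text{(P)}\quad & \min_{X}\ \langle C, X \rangle
  \quad \text{s.t.}\ \operatorname{diag}(X) = e,\ X \succeq 0,\\
\text{(D)}\quad & \max_{z,\,S}\ e^\top z
  \quad \text{s.t.}\ \operatorname{Diag}(z) + S = C,\ S \succeq 0.
\end{aligned}
```

Here the unit diagonal in (P) comes from the binary condition $x_i^2 = 1$, and (D) is the slack form of the dual, obtained by dualizing the diagonal constraint with multiplier $z$.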
2. Burer–Monteiro Factorization and the Oblique Manifold
To leverage solution low-rankness, the algorithm parameterizes $S$ as $S = YY^\top$ with $Y \in \mathbb{R}^{n \times p}$ and $p \ll n$. The unit diagonal constraint $\operatorname{diag}(S) = e$ then forces every row of $Y$ to have unit norm, so $Y$ lies on the oblique manifold
$$\mathrm{OB}(n, p) = \{\, Y \in \mathbb{R}^{n \times p} : \|Y_{i,:}\|_2 = 1,\ i = 1, \dots, n \,\}.$$
The tangent space at $Y$ is $T_Y \mathrm{OB}(n,p) = \{\, U \in \mathbb{R}^{n \times p} : \operatorname{diag}(UY^\top) = 0 \,\}$, with the Riemannian metric induced by the Euclidean inner product $\langle U, V \rangle = \operatorname{tr}(U^\top V)$. The orthogonal projection onto the tangent space is $P_Y(Z) = Z - \operatorname{ddiag}(ZY^\top)\, Y$, where $\operatorname{ddiag}$ zeroes all off-diagonal entries.
This factorization converts positive semidefiniteness and unit-diagonal constraints into a nonlinear manifold constraint, for which Riemannian optimization techniques are natural.
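The two manifold operations above are a few lines of NumPy; a minimal sketch (helper names are my own):

```python
import numpy as np

def retract(Y):
    """Nearest point on OB(n, p): rescale each row to unit norm."""
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)

def proj_tangent(Y, Z):
    """Orthogonal projection onto T_Y OB(n, p):
    P_Y(Z) = Z - ddiag(Z Y^T) Y, i.e. remove from each row of Z
    its component along the corresponding (unit-norm) row of Y."""
    return Z - np.sum(Z * Y, axis=1, keepdims=True) * Y
```

For a point produced by `retract`, any output of `proj_tangent` has zero row-wise inner product with `Y`, which is exactly the tangency condition $\operatorname{diag}(UY^\top) = 0$.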
3. Augmented Lagrangian and Dual ADMM Decomposition
The ManiDSDP approach constructs an augmented Lagrangian $L_\sigma(y, S; X)$ in the dual variables $(y, S)$, with multiplier $X$ (which plays the role of the primal matrix) and penalty parameter $\sigma > 0$: the dual objective plus a linear multiplier term and a quadratic penalty on the residual of the affine dual constraint.
ADMM alternates three blocks per iteration:
- S-update: minimize $L_\sigma$ over $S$ with $(y, X)$ fixed; under the factorization $S = YY^\top$ this becomes the Riemannian subproblem of Section 4.
- y-update (closed form): minimize $L_\sigma$ over $y$ with $(S, X)$ fixed; this unconstrained strongly convex quadratic reduces to a linear system involving $\mathcal{A}\mathcal{A}^*$.
- Multiplier update: a gradient-ascent step $X \leftarrow X + \sigma\, r$, where $r$ is the residual of the affine dual constraint.
The remaining dual quantities are recovered in closed form from the final iterates $(y, S, X)$.
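The update structure can be made concrete on the max-cut dual ($\operatorname{Diag}(z) + S = C$, $S \succeq 0$): the sketch below is a simplified classical dual ADMM in which the S-update is an exact PSD eigenvalue projection rather than the paper's Riemannian trust-region subsolver; the function names, $\sigma$, and iteration count are illustrative.

```python
import numpy as np

def psd_proj(M):
    """Project a symmetric matrix onto the PSD cone."""
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T

def dual_admm_maxcut(C, sigma=1.0, iters=500):
    """Dual ADMM on: min -e^T z  s.t.  Diag(z) + S = C, S >= 0.
    The multiplier X converges to the primal solution (diag(X) = e)."""
    n = C.shape[0]
    z, S, X = np.zeros(n), np.zeros((n, n)), np.eye(n)
    for _ in range(iters):
        # S-update (here: PSD projection; ManiDSDP instead solves a
        # Riemannian trust-region subproblem over the oblique manifold)
        S = psd_proj(C - np.diag(z) - X / sigma)
        # z-update: closed form from the stationarity condition in z
        z = np.diag(C - S) + (1.0 - np.diag(X)) / sigma
        # multiplier (primal) ascent step on the constraint residual
        X = X + sigma * (np.diag(z) + S - C)
    return z, S, X
```

Note how the z-stationarity condition makes $\operatorname{diag}(X) = e$ hold exactly after the multiplier step, so the multiplier is a feasible primal candidate throughout.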
4. Riemannian Subproblem and Trust-Region Solver
The S-update is restated over $\mathrm{OB}(n, p)$ via $S = YY^\top$, yielding a smooth (nonconvex) minimization of the augmented Lagrangian in $Y$.
This subproblem is solved with a Riemannian trust-region method. Collecting the subproblem's data into a symmetric matrix $X$ assembled from the current iterates, the Euclidean gradient takes the form $2XY$, and the Riemannian gradient is its tangent projection $P_Y(2XY)$ by the projection formula above.
Second-order optimality checks are performed during the subsolve: the trust-region iteration stops once the Riemannian gradient norm and the smallest eigenvalue of the Riemannian Hessian satisfy inner tolerances, which are tightened over the outer iterations. On encountering negative curvature, eigendirections associated with negative eigenvalues are used to augment $Y$ (increasing the rank) and escape saddle points.
The overall ManiDSDP (outer ADMM) algorithm additionally adapts the rank $p$ and the penalty parameter $\sigma$, and includes an optional saddle-escape step.
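The shape of the subproblem can be illustrated with a plain Riemannian gradient-descent subsolver (a stand-in for the trust-region method) on the model objective $f(Y) = \tfrac12\|YY^\top + M\|_F^2$, whose Euclidean gradient is $2XY$ with $X = YY^\top + M$; the objective, step-size rule, and names here are illustrative:

```python
import numpy as np

def solve_subproblem(M, p, steps=300, seed=0):
    """Minimize f(Y) = 0.5 * ||Y Y^T + M||_F^2 over OB(n, p) by
    Riemannian gradient descent with Armijo backtracking and a
    row-normalization retraction."""
    f = lambda Y: 0.5 * np.linalg.norm(Y @ Y.T + M) ** 2
    retract = lambda Y: Y / np.linalg.norm(Y, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    Y = retract(rng.standard_normal((M.shape[0], p)))
    for _ in range(steps):
        X = Y @ Y.T + M                                 # gradient data matrix
        G = 2.0 * X @ Y                                 # Euclidean gradient 2 X Y
        G -= np.sum(G * Y, axis=1, keepdims=True) * Y   # tangent projection
        t = 1.0
        while t > 1e-12:                                # Armijo backtracking
            Y_new = retract(Y - t * G)
            if f(Y_new) <= f(Y) - 1e-4 * t * (G * G).sum():
                Y = Y_new
                break
            t *= 0.5
    return Y
```

Each accepted step strictly decreases $f$, so the iteration is monotone; the trust-region method of the paper replaces this first-order loop with Hessian-based model steps and the second-order stopping test described above.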
5. Theoretical Guarantees
Convergence analysis (Theorem 6.1 of (Wang et al., 4 Dec 2025)) establishes that, under suitable summable inexactness schedules for the subproblem tolerances and bounded multipliers, every cluster point of the ManiDSDP iterates satisfies the KKT conditions of the dual SDP. The core analytical lemmas link descent of the augmented Lagrangian to reduction of the constraint-feasibility residual, showing that successive iterate differences and feasibility residuals vanish in the limit, together with asymptotic positive semidefiniteness and complementary slackness for the recovered primal-dual pair.
A plausible implication is that, provided the Riemannian subproblems are solved to sufficiently high accuracy, the algorithm approaches optimality even in the presence of the nonconvex factorization.
6. Computational Cost and Scalability
The dominant per-iteration costs are as follows:
- Forming the matrix that defines the Riemannian subproblem: $O(n^2)$ dense assembly, plus the cost of applying $\mathcal{A}^*$.
- Riemannian trust-region solve: each gradient or Hessian-vector product costs $O(n^2 p)$. Since $p \ll n$, the per-iteration cost stays well below the $O(n^3)$ of eigendecomposition-based projections.
Typically, the number of trust-region steps per ADMM iteration is in the tens. This cost structure is highly favorable for large, low-rank problems.
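The low-rank cost savings hinge on never materializing $n \times n$ products when a factored form is available; parenthesization alone changes the complexity class (sizes here are illustrative):

```python
import numpy as np

# With Y of size n x p and V of size n x p, computing (Y Y^T) V directly
# costs O(n^2 p) flops and O(n^2) memory to form Y Y^T, whereas the
# factored form Y (Y^T V) costs O(n p^2) flops and O(n p) memory.
rng = np.random.default_rng(0)
n, p = 2000, 10
Y = rng.standard_normal((n, p))
V = rng.standard_normal((n, p))

direct = (Y @ Y.T) @ V        # materializes an n x n matrix
factored = Y @ (Y.T @ V)      # never forms anything larger than n x p

print(np.allclose(direct, factored))  # True
```

The same trick underlies the cheap Hessian-vector products in the trust-region subsolver whenever a factor of the operand is at hand.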
7. Numerical Results and Algorithmic Phenomena
Comprehensive experiments benchmark ManiDSDP against MOSEK, COPT, SDPNAL+, and ManiSDP on second-order SDP relaxations of dense and sparse BQPs. On the largest dense BQP instances, ManiDSDP finishes within the time budget while MOSEK runs out of memory, COPT encounters numerical issues, SDPNAL+ exceeds the time limit, and ManiSDP needs roughly twice the time. On large sparse BQPs, ManiDSDP completes while the other solvers often time out.
ManiDSDP reaches high accuracy within a few tens of outer ADMM iterations, orders of magnitude fewer than traditional ADMM requires. Its maximal factorization ranks and outer iteration counts are also lower than those of ManiSDP.
A notable “residue-diving” phenomenon is observed: the KKT residual can drop by several orders of magnitude in a single ADMM step, indicating an abrupt late-stage improvement in both optimality and feasibility.
The overall method is shown to be both memory- and time-efficient, scalable, and competitive for challenging SDPs with unit diagonal structure (Wang et al., 4 Dec 2025).