RiNNAL-POP: Scalable Algorithm for POP Relaxations
- RiNNAL-POP is a framework that uses low-rank augmented Lagrangian methods to solve large-scale polyhedral semidefinite and moment–SOS relaxations for polynomial optimization problems.
- It reformulates the relaxation via tailored projection schemes and splitting techniques, unifying SDP, DNN, RLT, and SOS approaches in a conic programming setup.
- Empirical studies show 5×–100× runtime improvements and high solution accuracy (KKT residual <10⁻⁶) across benchmarks with high-dimensional problem instances.
The RiNNAL-POP algorithmic framework is a low-rank augmented Lagrangian method (ALM) designed to solve large-scale polyhedral semidefinite programming (SDP) relaxations and moment–sum-of-squares (SOS) relaxations of polynomial optimization problems (POPs). By exploiting low-rank factorization, tailored projection schemes, and hidden facial structures in the conic relaxations, RiNNAL-POP achieves improved scalability and solution accuracy for high-dimensional and highly constrained POP instances, substantially outperforming prior state-of-the-art solvers on benchmark problems (Hou et al., 6 Dec 2025).
1. Problem Formulation and Polyhedral–SDP Relaxation
Consider the general POP of the form

$$\min_{x \in \mathbb{R}^n} \; f(x) \quad \text{s.t.} \quad g_i(x) \geq 0, \;\; i = 1, \dots, m, \quad x \in K,$$

where $K \subseteq \mathbb{R}^n$ is a conic feasibility domain and each of $f, g_1, \dots, g_m$ is a real multivariate polynomial. The relaxation process proceeds in two standard steps: homogenization and lifting.
- Homogenization: For a given even order $2d$, set $x_0$ as the homogenizing variable, $\bar{x} = (x_0, x) \in \mathbb{R}^{n+1}$, and define the degree-$2d$ homogenization $\tilde{p}(\bar{x}) = x_0^{2d}\, p(x / x_0)$ of each polynomial $p \in \{f, g_1, \dots, g_m\}$.
- Lifting: Let $\mathcal{I} = \{\alpha \in \mathbb{N}^{n+1} : |\alpha| = d\}$, with $N := |\mathcal{I}|$, index all degree-$d$ monomials, and define $v(\bar{x}) = (\bar{x}^{\alpha})_{\alpha \in \mathcal{I}} \in \mathbb{R}^{N}$ as the vector of these monomials. The key lifting variable is $X = v(\bar{x})\, v(\bar{x})^{\top} \in \mathbb{S}^{N}_{+}$.
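As a concrete illustration of the lifting step, the sketch below (a minimal example with illustrative names, not the paper's code) enumerates $\mathcal{I}$, forms $v(\bar{x})$, and checks that every entry of the rank-one lift $X = v(\bar{x})\, v(\bar{x})^{\top}$ depends only on the index sum $\alpha + \beta$:

```python
import numpy as np
from itertools import product

n, d = 2, 2                          # two original variables, order d = 2
x_bar = np.array([1.0, 0.5, -1.3])   # homogenized point (x0, x1, x2)

# Index set I: all exponents alpha in N^{n+1} with |alpha| = d.
mons = [a for a in product(range(d + 1), repeat=n + 1) if sum(a) == d]

# Monomial vector v(x_bar) and the rank-one lift X = v v^T.
v = np.array([np.prod(x_bar ** np.array(a)) for a in mons])
X = np.outer(v, v)

# Entries of a rank-one lift depend only on the index sum alpha + beta,
# which is exactly what the consistency constraints enforce on general X.
for i, a in enumerate(mons):
    for j, b in enumerate(mons):
        assert np.isclose(X[i, j], np.prod(x_bar ** (np.array(a) + np.array(b))))
```

This index-sum dependence is precisely the consistency requirement imposed on a general feasible $X$ in the relaxation below.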
The canonical polyhedral–SDP relaxation seeks

$$\min_{X \in \mathbb{S}^{N}} \; \langle C, X \rangle \quad \text{s.t.} \quad \mathcal{A}(X) = b, \quad X \in \mathbb{S}^{N}_{+} \cap \mathcal{P},$$

where $\mathcal{P}$ is a polyhedral cone (e.g., entrywise nonnegativity for DNN relaxations), and $\mathcal{A}$ enforces the consistency constraints $X_{\alpha\beta} = X_{\gamma\delta}$ whenever $\alpha + \beta = \gamma + \delta$. The relaxation unifies various standard hierarchies (standard SDP, doubly nonnegative (DNN), RLT, and SOS) under a general conic program (Hou et al., 6 Dec 2025).
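For orientation, here is a generic modeling-layer sketch of one instance of this template: the DNN relaxation of a tiny standard quadratic program (StQP, one of the benchmark families in Section 7). It uses cvxpy purely to illustrate the conic structure; it is not the RiNNAL-POP solver, and the instance data is made up.

```python
import cvxpy as cp
import numpy as np

# Tiny StQP: min x'Qx s.t. sum(x) = 1, x >= 0 (made-up data).
Q = np.array([[ 1.0, -0.5,  0.2],
              [-0.5,  2.0, -0.3],
              [ 0.2, -0.3,  1.5]])
m = Q.shape[0]

# DNN relaxation: lift X ~ xx', keep X PSD and entrywise nonnegative.
X = cp.Variable((m, m), PSD=True)          # X in S_+
constraints = [cp.sum(X) == 1,             # lifted (e'x)^2 = 1
               X >= 0]                     # polyhedral cone P (DNN)
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)), constraints)
prob.solve()
print(prob.value)                          # a lower bound on the StQP optimum
```

Generic conic solvers handle such instances at small scale; RiNNAL-POP targets the regime where the matrix dimension makes interior-point and even standard first-order SDP methods impractical.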
2. Augmented Lagrangian Splitting and Algorithmic Structure
The polyhedral–SDP relaxation is reformulated in splitting form over primal variables $(X, Z)$:

$$\min_{X, Z} \; \langle C, X \rangle \quad \text{s.t.} \quad \mathcal{A}(X) = b, \quad X = Z,$$

where $X \in \mathbb{S}^{N}_{+}$, $Z \in \mathcal{P}$, and $\mathcal{A}$ encodes the linear equality constraints. The augmented Lagrangian is

$$L_{\sigma}(X, Z; y, W) = \langle C, X \rangle - \langle y, \mathcal{A}(X) - b \rangle - \langle W, X - Z \rangle + \frac{\sigma}{2}\Big(\|\mathcal{A}(X) - b\|^{2} + \|X - Z\|^{2}\Big),$$

parametrized by dual variables $(y, W)$ and penalty $\sigma > 0$.
The variable $Z$ is eliminated via its proximal mapping (the projection $\Pi_{\mathcal{P}}$), and each ALM iteration centers on the minimization of a convex function over $\mathbb{S}^{N}_{+}$:

$$\min_{X \in \mathbb{S}^{N}_{+}} \; \varphi(X) := \min_{Z \in \mathcal{P}} L_{\sigma}(X, Z; y, W).$$
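A minimal sketch of this reduced subproblem, assuming the splitting and augmented Lagrangian reconstructed above (the helpers `Amap`, `Aadj`, and `proj_P` are placeholders, not the paper's API): eliminating $Z$ in closed form makes $\varphi$ smooth, with a gradient that costs one polyhedral projection per evaluation.

```python
import numpy as np

def phi_and_grad(X, C, Amap, Aadj, b, y, W, sigma, proj_P):
    """Value and gradient of phi(X) = min_{Z in P} L_sigma(X, Z; y, W)."""
    r = Amap(X) - b                       # equality residual A(X) - b
    V = X - W / sigma                     # shifted point for the Z-step
    Zs = proj_P(V)                        # optimal Z = Pi_P(X - W / sigma)
    val = (np.sum(C * X) - y @ r + 0.5 * sigma * (r @ r)
           + 0.5 * sigma * np.sum((Zs - V) ** 2)
           - 0.5 * np.sum(W ** 2) / sigma)
    grad = C - Aadj(y) + sigma * Aadj(r) + sigma * (V - Zs)
    return val, grad
```

The $\tfrac{\sigma}{2}\,\mathrm{dist}^2$ term hidden in `val` is a Moreau envelope, which is why $\varphi$ is continuously differentiable despite the nonsmooth constraint $Z \in \mathcal{P}$.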
3. Low-rank Algorithmic Steps and Projection Schemes
RiNNAL-POP employs a hybrid two-phase strategy in every ALM subproblem:
- Low-rank phase: The primal matrix is factorized as $X = YY^{\top}$ with $Y \in \mathbb{R}^{N \times r}$ and $r \ll N$, reducing the number of unknowns and constraints from $O(N^{2})$ to $O(Nr)$. The nonconvex subproblem

$$\min_{Y \in \mathcal{M}} \; \varphi(YY^{\top})$$

is addressed via projected gradient steps on the manifold $\mathcal{M}$ induced by the equality constraints retained in the factorized model.
- Convex-lifting phase: Once progress in the low-rank objective stalls or the rank $r$ is insufficient, a single projected gradient step is performed on $X$ in the original convex feasible set:

$$X^{+} = \Pi_{\mathbb{S}^{N}_{+}}\big(X - \tau \nabla \varphi(X)\big), \quad \text{where } X = YY^{\top}.$$

This corrects for infeasibility, escapes spurious stationary points of the factorized problem, and automatically updates the factorization rank via eigendecomposition of $X^{+}$ (see the sketch below).
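The alternation of the two phases can be condensed into the following sketch (illustrative only: `grad_phi` and the retraction `proj_M` are assumed supplied, and the stalling test that triggers the lifting phase is omitted):

```python
import numpy as np

def two_phase_step(Y, grad_phi, proj_M, tau, rank_tol=1e-8):
    # Low-rank phase: gradient step on the factor Y; for symmetric G,
    # the derivative of phi(Y Y^T) with respect to Y is 2 G Y.
    G = grad_phi(Y @ Y.T)
    Y = proj_M(Y - tau * (2.0 * G @ Y))

    # Convex-lifting phase: one projected gradient step on the full X.
    X = Y @ Y.T
    M = X - tau * grad_phi(X)
    w, Q = np.linalg.eigh(M)              # PSD projection via eigendecomposition
    w = np.maximum(w, 0.0)
    keep = w > rank_tol                   # numerical rank read off the spectrum
    return Q[:, keep] * np.sqrt(w[keep])  # refactorized Y with adapted rank
```

The eigendecomposition in the lifting phase serves double duty: it performs the projection onto $\mathbb{S}^{N}_{+}$ and simultaneously reveals the numerical rank used to resize $Y$.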
The projection onto $\mathbb{S}^{N}_{+}$ uses the closed form

$$\Pi_{\mathbb{S}^{N}_{+}}(M) = Q \operatorname{Diag}\!\big(\max(\lambda, 0)\big) Q^{\top}, \quad \text{where } M = Q \operatorname{Diag}(\lambda) Q^{\top} \text{ is an eigendecomposition.}$$
Projection onto the polyhedral set $\mathcal{P}$ (enforcing the consistency and nonnegativity constraints) leverages the composition

$$\Pi_{\mathcal{P}}(M) = \Pi_{+}\big(\mathcal{T}(M)\big),$$

where $\mathcal{T}$ is an averaging operator over "index-sum" classes to enforce consistency and normalization, and $\Pi_{+}$ applies entrywise nonnegativity, reducing the cost to linear in the size of $X$.
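Since all entries in an index-sum class must share one nonnegative value, the exact projection of each class is its clipped average, as in this sketch (assuming the class structure above; normalization constraints are omitted for brevity):

```python
import numpy as np
from collections import defaultdict
from itertools import product

def project_polyhedral(X, mons):
    """Project X onto {entries equal within index-sum classes, X >= 0}."""
    classes = defaultdict(list)
    for (i, a), (j, b) in product(enumerate(mons), repeat=2):
        classes[tuple(np.add(a, b))].append((i, j))
    Y = np.empty_like(X)
    for entries in classes.values():
        rows, cols = zip(*entries)
        Y[rows, cols] = max(X[rows, cols].mean(), 0.0)  # average, then clip
    return Y
```

Each entry of $X$ is visited a constant number of times, which is the source of the linear cost noted above.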
4. Exploiting Facial Structures and Dual Certificate Recovery
Facial reduction is systematically applied by considering the exposed faces of the semidefinite cone determined by the affine constraints $\mathcal{A}(X) = b$ and the polyhedral cone $\mathcal{P}$. Any feasible point admits the representation

$$X = B^{\top} \hat{X} B, \qquad \hat{X} \in \mathbb{S}^{\hat{N}}_{+},$$

where the rows of $B \in \mathbb{R}^{\hat{N} \times N}$ span the subspace containing the range of every feasible $X$. Restricting to this subspace preserves feasibility and tightens the relaxation.
Dual certificate recovery for KKT optimality is achieved by

$$y = (\mathcal{A}\mathcal{A}^{*})^{-1}\, \mathcal{A}(C - W), \qquad S = C - \mathcal{A}^{*}(y) - W,$$

which satisfies $\langle S, X \rangle = 0$ for the computed $X$, ensuring complementarity and obviating the need for solving large linear systems beyond the initial inversion of $\mathcal{A}\mathcal{A}^{*}$.
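A dense-matrix sketch of this recovery under the least-squares rule written above (here the hypothetical `Amat` stacks the vectorized constraint matrices, so $\mathcal{A}(X) = \texttt{Amat} \cdot \mathrm{vec}(X)$):

```python
import numpy as np

def recover_dual(C, Amat, W, X):
    """Fit y from A*(y) ~ C - W, form S, and report complementarity."""
    rhs = (C - W).ravel()
    y, *_ = np.linalg.lstsq(Amat.T, rhs, rcond=None)  # least-squares fit of y
    S = C - W - (Amat.T @ y).reshape(C.shape)         # S = C - A*(y) - W
    return y, S, abs(np.sum(S * X))                   # <S, X> should be ~ 0
```

The normal equations behind this fit involve only $\mathcal{A}\mathcal{A}^{*}$, consistent with the single upfront factorization noted above.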
5. Extension to Moment–Sum-of-Squares Hierarchies
The RiNNAL-POP framework generalizes to moment–SOS relaxations, such as the Lasserre hierarchy, by casting these relaxations in the same splitting form:
- Moment matrices $M_{d}(\mathbf{y})$ with index set $\mathbb{N}^{n}_{d} = \{\alpha \in \mathbb{N}^{n} : |\alpha| \leq d\}$;
- Consistency via $[M_{d}(\mathbf{y})]_{\alpha\beta} = y_{\alpha+\beta}$;
- Constraints represented via localizing matrices $M_{d - \lceil \deg g_i / 2 \rceil}(g_i\, \mathbf{y})$, and additional auxiliary splitting variables for each matrix block.
The ALM subproblem then includes one low-rank/convex-lifting phase per matrix block, and projections are extended accordingly, maintaining efficiency and scalability for large-scale moment–SOS relaxations (Hou et al., 6 Dec 2025).
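To make the block structure concrete, the sketch below builds the moment matrix and one localizing matrix for the moment sequence of a point mass $\delta_{x}$ (so $y_{\alpha} = x^{\alpha}$), under the standard definitions $[M_d(\mathbf{y})]_{\alpha\beta} = y_{\alpha+\beta}$ and $[M(g\,\mathbf{y})]_{\alpha\beta} = \sum_{\gamma} g_{\gamma}\, y_{\alpha+\beta+\gamma}$; it illustrates the definitions rather than the solver's data path.

```python
import numpy as np
from itertools import product

def exponents(n, d):
    """All alpha in N^n with |alpha| <= d."""
    return [a for a in product(range(d + 1), repeat=n) if sum(a) <= d]

def moment_matrix(x, d, g=None):
    """M_d(y) for y_alpha = x^alpha; pass g (dict exponent -> coeff) to localize."""
    mons = exponents(len(x), d)
    if g is None:
        g = {(0,) * len(x): 1.0}          # g == 1 gives the plain moment matrix
    y = lambda a: float(np.prod(np.asarray(x, dtype=float) ** np.asarray(a)))
    return np.array([[sum(gc * y(np.asarray(a) + np.asarray(b) + np.asarray(c))
                          for c, gc in g.items())
                      for b in mons] for a in mons])

x = [0.5, -0.2]
M = moment_matrix(x, 2)                                   # M_2(y): rank one, PSD
L = moment_matrix(x, 1, g={(0, 0): 1.0,                   # localizes g = 1 - |x|^2
                           (2, 0): -1.0, (0, 2): -1.0})
assert np.linalg.eigvalsh(M).min() > -1e-12               # PSD at a feasible point
assert np.linalg.eigvalsh(L).min() > -1e-12               # g(x) > 0, so PSD too
```

At a point mass both blocks are rank one; in the relaxation, the same index structure is imposed on free variables $\mathbf{y}$, and each block receives its own low-rank/convex-lifting treatment.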
6. Theoretical Guarantees: Convergence and Complexity
Rigorous theoretical results for the ALM under the RiNNAL-POP framework are established:
- Global ALM convergence: With mild boundedness and Slater conditions, the iterates converge to a KKT point of the polyhedral–SDP problem, even with inexact subproblem solutions.
- Partial-smoothness property: The indicator function of $\mathbb{S}^{N}_{+}$ is partly smooth relative to the fixed-rank manifold, aiding local analysis and convergence.
- Finite-step rank identification: Under a nondegeneracy condition, the algorithm identifies the rank of solution matrices in finite steps.
- Complexity: Each ALM iteration costs $O(N^{2} r)$ for first-order updates plus one eigendecomposition, with effective practical scaling approaching linearity in the number of nonzero constraints for moderate $r$.
7. Empirical Performance and Practical Implementation
Extensive numerical experiments on benchmark POPs (including StQP, BIQ, MBP, MQKP, BQM, KM, matrix/tensor copositivity, and nonnegative tensor factorization) demonstrate empirical superiority over SDPNAL+, with typical runtime improvements of 5×–100×, recovery of low-rank solutions, and high solution accuracy (KKT residual below $10^{-6}$) on high-dimensional instances.
Empirically recommended hyperparameters include:
- Initial penalty parameter $\sigma_0$, adapted when primal residuals greatly exceed dual residuals (or vice versa);
- A small initial factorization rank $r_0$, subsequently adjusted by the convex-lifting phase;
- Barzilai–Borwein steps and nonmonotone line search in the low-rank phase (see the sketch after this list);
- A safeguarded projected-gradient stepsize $\tau$ in the convex phase;
- Early termination of the low-rank phase upon objective stalling, followed by a single convex-lifting correction.
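For reference, a safeguarded Barzilai–Borwein stepsize of the kind typically used in such low-rank phases can be sketched as follows (an illustrative recipe, not the paper's exact rule):

```python
import numpy as np

def bb_stepsize(Y, Y_prev, G, G_prev, k, tau_min=1e-10, tau_max=1e10):
    """Alternating BB1/BB2 stepsize with safeguards."""
    S = Y - Y_prev                            # iterate difference
    D = G - G_prev                            # gradient difference
    sd, dd, ss = np.sum(S * D), np.sum(D * D), np.sum(S * S)
    tau = ss / sd if k % 2 == 0 else sd / dd  # BB1 on even, BB2 on odd iterations
    if not np.isfinite(tau) or tau <= 0:      # curvature test failed
        tau = 1e-3                            # conservative fallback
    return min(max(tau, tau_min), tau_max)
```

Combined with a nonmonotone line search, such steps let the low-rank phase take aggressive strides while the convex-lifting correction guards global behavior.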
Collectively, these methodological and computational advances yield a robust, scalable framework for the solution of large-scale polyhedral–SDP and moment–SOS relaxations in polynomial optimization (Hou et al., 6 Dec 2025).