RiNNAL+: Efficient Riemannian ALM for MBQPs

Updated 25 July 2025
  • RiNNAL+ is a Riemannian augmented Lagrangian method that solves mixed-binary quadratic programs through DNN and SDP-RLT relaxations, ensuring tight bounds and equivalence guarantees.
  • It combines low-rank matrix factorization with a hybrid Riemannian-projected gradient approach to accelerate convergence and automate rank adaptation.
  • Experimental results indicate RiNNAL+ achieves high accuracy and reduces computation time by up to 100-fold compared to traditional solvers on large-scale optimization instances.

RiNNAL+ is a Riemannian augmented Lagrangian method (ALM) specifically designed to efficiently solve large-scale semidefinite relaxations of mixed-binary quadratic programs (MBQPs) via doubly nonnegative (DNN) and SDP-RLT (semidefinite programming–reformulation-linearization technique) relaxations. It introduces a theoretically grounded and computationally efficient algorithmic framework that blends low-rank matrix factorization with convex projection methods, yielding significant improvements in speed and robustness over traditional solvers for a broad class of non-convex, combinatorial optimization problems (Hou et al., 18 Jul 2025).

1. Problem Setting and Theoretical Foundations

RiNNAL+ targets the solution of MBQPs, a class of optimization problems in which the variables are partitioned into continuous and binary components and both the objective and the constraints may be quadratic. DNN relaxations often provide tight lower bounds for MBQPs but entail solving semidefinite programs (SDPs) of high dimensionality: for a problem with $n$ variables and $l$ inequality constraints, the DNN relaxation involves a matrix variable of order $n+l+1$, i.e., with $\Omega((n+l)^2)$ entries, making scalability a central concern (Hou et al., 18 Jul 2025).
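For concreteness, a generic MBQP of this class can be written in the following schematic form (an illustrative template consistent with the description above; the paper's exact formulation may organize the constraints differently):

$$\begin{aligned} \min_{x \in \mathbb{R}^n} \quad & x^\top Q x + 2\,c^\top x \\ \text{s.t.} \quad & A x = b, \quad x \ge 0, \\ & x_i \in \{0,1\}, \quad i \in B \subseteq \{1,\dots,n\}. \end{aligned}$$

Lifting $x$ to a matrix variable $Y = \begin{bmatrix} 1 & x^\top \\ x & X \end{bmatrix} \succeq 0$ with entrywise nonnegativity is the standard route to the DNN relaxation discussed here.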

A principal theoretical result underlying RiNNAL+ is the proof of exact equivalence between the DNN relaxation and the SDP-RLT relaxation for MBQPs. Specifically, using a linear bijective mapping $\Phi: S^{n+1} \rightarrow S^{n+l+1}$ (detailed in the source), the feasible sets and objective values of the two relaxations are shown to coincide:

$$v^{(\mathrm{SHOR})} \;\leq\; v^{(\mathrm{DNN})} \;=\; v^{(\text{SDP-RLT})} \;\leq\; v^{(P_1)} \;=\; v^{(P_2)}.$$

Moreover, with this mapping, the computationally more tractable SDP-RLT relaxation (with matrix dimension $n+1$) can be used in place of the higher-dimensional DNN formulation without loss of optimality (Hou et al., 18 Jul 2025).

2. Algorithmic Architecture: Riemannian ALM with Hybrid Descent

RiNNAL+ employs an augmented Lagrangian method on the semidefinite matrix variable $Y$, leveraging the low-rank (Burer–Monteiro) factorization

$$Y = \begin{bmatrix} e_1^\top \\ R \end{bmatrix} \begin{bmatrix} e_1 & R^\top \end{bmatrix},$$

with $R \in \mathbb{R}^{n \times r}$ and $e_1$ the first canonical basis vector. This construction restricts the search space to the low-rank manifold

$$\mathcal{M}_r = \{ R \in \mathbb{R}^{n \times r} : A R = b\, e_1^\top,\ \mathrm{diag}_B(R R^\top) = R_B\, e_1 \}.$$

A small numerical sketch of the factorization appears below; the optimization then alternates between two phases.
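The following minimal sketch (toy dimensions, not the authors' implementation) constructs the factor $U = [e_1^\top;\, R]$ and checks the basic structure of $Y$:

```python
import numpy as np

# Minimal sketch of the Burer-Monteiro construction Y = U U^T with
# U = [e_1^T; R]; toy dimensions, illustrative only.
n, r = 5, 3
rng = np.random.default_rng(0)
R = rng.standard_normal((n, r))

e1 = np.zeros(r)
e1[0] = 1.0                            # first canonical basis vector in R^r

U = np.vstack([e1, R])                 # (n+1) x r factor
Y = U @ U.T                            # (n+1) x (n+1), PSD with rank <= r

assert np.isclose(Y[0, 0], 1.0)        # Y_{00} = e_1^T e_1 = 1
assert np.allclose(Y[1:, 0], R @ e1)   # first column below Y_{00} equals R e_1
```

Feasibility on $\mathcal{M}_r$ would additionally require $AR = b\,e_1^\top$ and the diagonal condition, which this sketch does not enforce.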

Low-Rank Phase:

Minimize the ALM subproblem with respect to $R$ over $\mathcal{M}_r$ using a Riemannian gradient descent algorithm enhanced by Barzilai–Borwein step sizes and a non-monotone line search. The Riemannian gradient is given by

$$\nabla f_{(r)}(R) = 2\,\hat{I}\left[ C - B^*(\lambda^+(R)) - C^*(\mu^+(R)) \right] R,$$

where $\lambda^+(R)$ and $\mu^+(R)$ are penalty-modified dual multipliers evaluated at the current $R$.
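The mechanics of Riemannian descent with Barzilai–Borwein (BB) steps can be sketched on a toy stand-in manifold, the unit Frobenius sphere, rather than $\mathcal{M}_r$, whose projection and retraction are problem-specific. The sketch below only illustrates the tangent projection, retraction, and BB step-size recurrence:

```python
import numpy as np

# Toy Riemannian gradient descent with a BB1 step size on the sphere
# {R : ||R||_F = 1}, standing in for M_r. Minimizes f(R) = <C, R R^T>.
# Vector transport is ignored for simplicity, as is common in BB-type
# heuristics.

def egrad(R, C):
    return 2.0 * C @ R                   # Euclidean gradient of f

def rgrad(R, G):
    return G - np.sum(G * R) * R         # project G onto the tangent space

def retract(R):
    return R / np.linalg.norm(R)         # pull the iterate back to the sphere

rng = np.random.default_rng(1)
n, r = 20, 3
C = rng.standard_normal((n, n)); C = (C + C.T) / 2
R = retract(rng.standard_normal((n, r)))

t, R_prev, G_prev = 1e-2, None, None
for _ in range(200):
    G = rgrad(R, egrad(R, C))
    if G_prev is not None:               # BB1 step: t = (s^T s) / (s^T y)
        s = (R - R_prev).ravel()
        y = (G - G_prev).ravel()
        if abs(s @ y) > 1e-12:
            t = abs(s @ s / (s @ y))
    R_prev, G_prev = R, G
    R = retract(R - t * G)
```

A production implementation would add the non-monotone line search used in RiNNAL+ and the retraction appropriate to $\mathcal{M}_r$.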

Convex Lifting Phase:

When progress stalls in the low-rank phase, a single projected-gradient (PG) step is taken in the full semidefinite space:

$$Y^{\text{new}} = \Pi_{F \cap S_+^{n+1}}\!\left( Y_t - t\, \nabla I(Y_t) \right),$$

with step size $t$ (often $1/\sigma$) and $\Pi$ the projection onto the intersection of the feasible set and the cone of positive semidefinite matrices. This step is computed by exploiting a variable transformation and a semismooth Newton-CG method for efficient projection.
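The positive-semidefinite half of this projection has a closed form via an eigendecomposition; a minimal sketch follows. The projection onto the full intersection $F \cap S_+^{n+1}$, which RiNNAL+ computes with the semismooth Newton-CG method, is omitted here:

```python
import numpy as np

# Projection onto the PSD cone: clip negative eigenvalues to zero.
# This is only one half of Pi_{F ∩ S_+^{n+1}}; coupling with the
# affine set F is what the semismooth Newton-CG method resolves.

def proj_psd(M):
    M = (M + M.T) / 2                         # symmetrize against round-off
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0.0, None)) @ V.T  # V diag(max(w, 0)) V^T

M = np.array([[1.0,  2.0],
              [2.0, -3.0]])
P = proj_psd(M)
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)  # P is PSD
```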

RiNNAL+ alternates between these two phases, benefiting from the computational speed of the low-rank Riemannian descent, while using the convex PG phase to automatically increase rank and overcome saddle points.

3. Acceleration Techniques: Preprocessing and Warm-Starts

To further improve efficiency, RiNNAL+ introduces a preprocessing step through the invertible transformation matrix

$$K = \begin{bmatrix} 1 & \tfrac{1}{2} e^\top \\ 0 & \tfrac{1}{2} I_n \end{bmatrix},$$

mapping the ALM variable $Y$ to a new basis in which the equality constraints become diagonal:

$$\tilde{Y} = (K^\top)^{-1}\, Y\, K^{-1}.$$

This transformation simplifies the structure of the projection, reducing the cost of the PG phase. The dual variables $(y, y_0)$ of the projection problem are warm-started from the multipliers $(\alpha, \mu, \beta)$ of the KKT system of the low-rank phase, specifically

$$y_0 = t\beta, \qquad y = t \left[ \alpha - \tfrac{1}{4} \textstyle\sum_k \mu_k;\ \tfrac{1}{4}\mu \right].$$

This strategy significantly reduces the number of semismooth Newton iterations required.
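A small sketch of the change of basis follows (toy dimensions; the warm-start lines use hypothetical multiplier values and shapes purely to show the assembly of $(y, y_0)$):

```python
import numpy as np

# Sketch of the preprocessing transformation K and the induced change
# of basis Y_tilde = K^{-T} Y K^{-1}; toy data, not the authors' code.
n = 4
e = np.ones((1, n))

K = np.block([[np.array([[1.0]]), 0.5 * e],
              [np.zeros((n, 1)),  0.5 * np.eye(n)]])

Kinv = np.linalg.inv(K)
Y = np.eye(n + 1)                      # placeholder PSD iterate
Y_tilde = Kinv.T @ Y @ Kinv            # equality constraints diagonalize here

# Warm-start assembly (hypothetical alpha, mu, beta standing in for the
# low-rank phase's KKT multipliers): y0 = t*beta, y = t*[alpha - sum(mu)/4; mu/4].
t, beta = 0.5, 0.1
alpha = np.ones(n)
mu = 0.2 * np.ones(n)
y0 = t * beta
y = t * np.concatenate([alpha - mu.sum() / 4.0, mu / 4.0])
```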

4. Comparative Performance and Robustness

Experimental results demonstrate that RiNNAL+ achieves state-of-the-art speed and robust convergence for a variety of test problems:

  • Binary integer quadratic (BIQ) problems, including strengthened BIQ formulations
  • Maximum stable set, quadratic knapsack (QKP), cardinality-constrained clustering (ccMSSC), sparse quadratic, and quadratic minimum spanning tree problems

For example, RiNNAL+ attains high-accuracy solutions to BIQ instances with $n = 5000$ in approximately 18 minutes, while the baseline solver SDPNAL+ fails to converge within one hour under the same conditions. On strengthened BIQ and QKP tasks, RiNNAL+ is 40–100 times faster than traditional ADMM/ALM-type methods (Hou et al., 18 Jul 2025). Automatic rank adaptation via the hybrid scheme removes the need for the heuristic or manual tuning that is typical of classical rank-adaptive approaches.

5. Automation of Rank Adaptation and Escape from Saddle Points

Traditional low-rank or rank-adaptive methods require elaborate rules or parameters to determine rank adjustment, which can lead to inefficiency or stalling at suboptimal points. The RiNNAL+ hybrid method bypasses this requirement: the projected gradient step directly "lifts" rank when necessary, without extensive parameter tuning. This approach also mitigates the risk of getting trapped in undesirable saddle points that are common in nonconvex factorization schemes. Performance profiles in the literature confirm that RiNNAL+ frequently outperforms SDPNAL+ and previous solvers, running 40–180 times faster while maintaining high solution quality on various benchmarks.

6. Summary and Contextual Significance

RiNNAL+ brings together theoretical advances and practical algorithm design for semidefinite optimization in the context of MBQPs. Its use of low-rank factorization, hybrid Riemannian–projected gradient methods, preprocessing, and warm-starting collectively addresses the principal challenges of scaling, rank adaptation, and computational cost inherent to large-scale DNN and SDP-RLT relaxations. By proving the equivalence of the DNN and SDP-RLT relaxations and demonstrating robust empirical performance, RiNNAL+ advances the state of the art in rigorous, scalable optimization methodology for nonconvex, combinatorial problems arising in operations research, machine learning, and related fields (Hou et al., 18 Jul 2025).

References

 1. Hou et al., "RiNNAL+: Efficient Riemannian ALM for MBQPs," 18 Jul 2025.