Semidefinite Programming: Theory & Algorithms
- Semidefinite programming is a convex optimization framework that minimizes a linear functional over the intersection of an affine space and the cone of positive semidefinite matrices.
- Algorithmic methods such as interior-point, LP/SOCP relaxations, and first-order techniques balance accuracy and computational cost in solving SDPs.
- Applications include combinatorial optimization, quantum information, and control systems, where SDP relaxations provide strong certificates and high-quality approximations.
A semidefinite programming (SDP) problem is a convex optimization problem in which a linear functional is minimized (or maximized) over the intersection of an affine space and the cone of positive semidefinite (PSD) matrices. SDPs generalize linear programs (LPs) and include, as special cases or relaxations, many important problems in combinatorial optimization, quantum information, control, and statistics. The standard form for the primal SDP is

$$
\min_{X \in \mathbb{S}^n} \ \langle C, X\rangle \quad \text{s.t.} \quad \langle A_i, X\rangle = b_i,\ i = 1,\dots,m, \qquad X \in \mathbb{S}^n_+,
$$

where $\mathbb{S}^n_+$ denotes the cone of $n \times n$ symmetric positive semidefinite matrices, $C, A_1, \dots, A_m \in \mathbb{S}^n$ are given data, and $b \in \mathbb{R}^m$ is given (Roig-Solvas et al., 2022, Skrzypczyk et al., 2023). SDP duality follows the general convex programming framework, and strong duality holds under mild regularity conditions (e.g., Slater's condition).
1. Standard Formulations and Duality
SDPs are formulated in primal-dual pairs. The primal problem is

$$
p^* = \min_{X \in \mathbb{S}^n_+} \ \langle C, X\rangle \quad \text{s.t.} \quad \mathcal{A}(X) = b,
$$

where $\mathcal{A}: \mathbb{S}^n \to \mathbb{R}^m$ is a linear map defined by $\mathcal{A}(X)_i = \langle A_i, X\rangle$. The dual problem is

$$
d^* = \max_{y \in \mathbb{R}^m} \ b^\top y \quad \text{s.t.} \quad S = C - \sum_{i=1}^m y_i A_i \in \mathbb{S}^n_+,
$$

with dual slack matrix $S = C - \mathcal{A}^*(y) \succeq 0$ (Skrzypczyk et al., 2023). Under strong duality, optimal values coincide ($p^* = d^*$), and complementary slackness holds ($\langle X^*, S^*\rangle = 0$, equivalently $X^* S^* = 0$). The set $\mathbb{S}^n_+$ is a proper self-dual cone, making conic duality theory applicable.
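The primal-dual pair can be checked numerically with an off-the-shelf modeling tool. Below is a minimal sketch, assuming CVXPY with the SCS solver and NumPy are available; all problem data are synthetic, and a trace constraint is included so the primal is bounded and Slater's condition holds.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 4
sym = lambda M: (M + M.T) / 2

# Data: include trace(X) = n among the constraints so the primal is bounded,
# and build b from the strictly feasible point X0 = I so Slater's condition holds.
A = [np.eye(n)] + [sym(rng.standard_normal((n, n))) for _ in range(m - 1)]
C = sym(rng.standard_normal((n, n)))
X0 = np.eye(n)
b = np.array([np.trace(Ai @ X0) for Ai in A])

# Primal: minimize <C, X> subject to <A_i, X> = b_i, X PSD
X = cp.Variable((n, n), symmetric=True)
primal = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                    [X >> 0] + [cp.trace(A[i] @ X) == b[i] for i in range(m)])
primal.solve(solver=cp.SCS)

# Dual: maximize b'y subject to S = C - sum_i y_i A_i PSD
y = cp.Variable(m)
S = C - sum(y[i] * A[i] for i in range(m))
dual = cp.Problem(cp.Maximize(b @ y), [S >> 0])
dual.solve(solver=cp.SCS)

print("p* =", primal.value, " d* =", dual.value)    # equal under strong duality
print("<X*, S*> =", np.trace(X.value @ S.value))    # ~ 0 (complementary slackness)
```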
2. Algorithmic Methods for SDP
2.1 Interior-Point Methods
Interior-point algorithms deploy logarithmic barrier functions such as $-\log\det X$ to enforce strict feasibility ($X \succ 0$), with updates computed via Newton's method applied to the KKT system. The theoretical complexity for $\epsilon$-accuracy is $O(\sqrt{n}\,\log(1/\epsilon))$ iterations, with each iteration requiring the solution of large linear systems (typically $O(mn^3 + m^2n^2 + m^3)$ arithmetic), limiting applicability to moderate $n$ and $m$ (Skrzypczyk et al., 2023). Nevertheless, they yield robust, high-accuracy solutions, and are the mainstay in mature solvers (e.g., SeDuMi, SDPT3, MOSEK).
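The barrier mechanism can be illustrated directly. The sketch below, assuming CVXPY and NumPy with synthetic data (and a trace constraint so the problem is bounded), solves the barrier subproblem $\min\ \langle C, X\rangle - \mu \log\det X$ over the affine constraints for a decreasing sequence of $\mu$, tracing the central path toward the SDP optimum; production interior-point codes instead take damped Newton steps on the KKT system with careful step control.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3
sym = lambda M: (M + M.T) / 2
A = [np.eye(n)] + [sym(rng.standard_normal((n, n))) for _ in range(m - 1)]
C = sym(rng.standard_normal((n, n)))
b = np.array([np.trace(Ai) for Ai in A])        # X = I is strictly feasible

X = cp.Variable((n, n), symmetric=True)
affine = [cp.trace(A[i] @ X) == b[i] for i in range(m)]

# Decreasing the barrier weight mu traces the central path; log_det keeps X > 0.
for mu in [1.0, 1e-1, 1e-2, 1e-3]:
    cp.Problem(cp.Minimize(cp.trace(C @ X) - mu * cp.log_det(X)), affine).solve()
    print(f"mu = {mu:g}:  <C, X(mu)> = {cp.trace(C @ X).value:.6f}")
```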
2.2 LP/SOCP Approximation and Structured Subsets
Recent developments utilize tractable conic inner-approximations of the PSD cone $\mathbb{S}^n_+$. The cones of diagonally dominant (DD) and scaled diagonally dominant (SDD) matrices serve as LP- and SOCP-representable subsets, respectively (Roig-Solvas et al., 2022, Miller et al., 2019):
- DD: $A \in DD_n$ if $a_{ii} \ge \sum_{j \ne i} |a_{ij}|$ for all $i$.
- SDD: $A \in SDD_n$ if there exists a positive diagonal matrix $D$ such that $DAD \in DD_n$.
SDPs with the constraint $X \in DD_n$ become LPs; with $X \in SDD_n$, SOCPs. Notably, the globally convergent algorithm of Roig-Solvas & Sznaier alternates decrease and centering phases, solving a sequence of LP/SOCP subproblems via Cholesky-basis changes and interior-point centering (Roig-Solvas et al., 2022). Convergence is provably polynomial (and logarithmic in $1/\epsilon$), making LP/SOCP-based frameworks attractive for general SDPs.
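As a concrete illustration, the sketch below (assuming CVXPY and NumPy; synthetic data) replaces the PSD constraint of a small standard-form SDP with the DD inner approximation, turning it into an LP whose optimal value upper-bounds the SDP minimum.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 3
sym = lambda M: (M + M.T) / 2
C = sym(rng.standard_normal((n, n)))
A = [np.eye(n)] + [sym(rng.standard_normal((n, n))) for _ in range(m - 1)]
b = np.array([np.trace(Ai) for Ai in A])           # X = I is feasible

X = cp.Variable((n, n), symmetric=True)
T = cp.Variable((n, n))                            # elementwise bound on |X|
dd_constraints = [
    T >= X, T >= -X,                               # T_ij >= |X_ij|
    cp.diag(X) >= cp.sum(T, axis=1) - cp.diag(T),  # row diagonal dominance
]
affine = [cp.trace(A[i] @ X) == b[i] for i in range(m)]

# Every DD matrix with nonnegative diagonal is PSD, so this LP is a restriction
# of the SDP and its value upper-bounds the SDP minimum.
lp_bound = cp.Problem(cp.Minimize(cp.trace(C @ X)), affine + dd_constraints)
lp_bound.solve()
print("DD (LP) upper bound on the SDP optimum:", lp_bound.value)
```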
Structured subset methods exploit problem sparsity or symmetry by decomposing PSD constraints into blocks on chordal or group-theoretic cliques, imposing tractable cones on large blocks and full PSD on small ones, often achieving significant speedups and maintaining bound tightness (Miller et al., 2019).
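A minimal sketch of the clique-decomposition idea follows, assuming CVXPY and NumPy with a synthetic tridiagonal cost whose sparsity graph (a path) is chordal with cliques $\{k, k+1\}$: the single $n \times n$ PSD constraint is replaced by $2 \times 2$ PSD constraints on the clique blocks, and the optimal value is unchanged because only pattern entries enter the problem and chordal patterns admit PSD completions.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
n = 8
d = rng.uniform(1.0, 2.0, n)
o = rng.uniform(-0.5, 0.5, n - 1)
C = np.diag(d) + np.diag(o, 1) + np.diag(o, -1)    # tridiagonal (chordal) cost

def clique_block(M, k):
    B = M[k:k + 2, k:k + 2]
    return (B + B.T) / 2        # symmetrize the slice for the PSD constraint

# Full SDP: minimize <C, X> s.t. trace(X) = 1, X PSD  (value = lambda_min(C))
X = cp.Variable((n, n), symmetric=True)
full = cp.Problem(cp.Minimize(cp.trace(C @ X)), [X >> 0, cp.trace(X) == 1])
full.solve()

# Decomposed problem: only 2x2 PSD blocks on the cliques of the chordal pattern
Xd = cp.Variable((n, n), symmetric=True)
blocks = [clique_block(Xd, k) >> 0 for k in range(n - 1)]
dec = cp.Problem(cp.Minimize(cp.trace(C @ Xd)), blocks + [cp.trace(Xd) == 1])
dec.solve()

print("full PSD constraint :", full.value)
print("clique decomposition:", dec.value)    # matches: the pattern is chordal
```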
2.3 First-Order and Sketching Methods
Storage and arithmetic complexity for standard SDP solvers scale as $O(n^2)$ (for the dense matrix variable alone) or worse, motivating algorithms with reduced memory footprints or fast per-iteration costs:
- Conditional Gradient / Frank–Wolfe: Maintain low-rank iterates, updating along extreme directions (rank-one PSD matrices $v_t v_t^\top$), with update rules of the form $X_{t+1} = (1-\eta_t)\,X_t + \eta_t\, v_t v_t^\top$.
- Sketching (Nyström): Track a sketch $Y = X\Omega$ for a random test matrix $\Omega \in \mathbb{R}^{n \times r}$, and reconstruct a low-rank approximant at convergence (Yurtsever et al., 2019).
In the weakly constrained regime—where $m \ll n^2$, so optimal solutions are generically low rank (rank $r$ with $r(r+1)/2 \le m$, the Barvinok–Pataki bound)—these methods extend SDP solving to dimensions far beyond interior-point reach (Ding et al., 2019, Yurtsever et al., 2019). For such problems, approximate complementarity principles enable primal recovery from a small-dimensional eigenspace of the dual slack.
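A minimal sketch of the conditional-gradient idea (NumPy only; synthetic data): minimizing the smooth feasibility objective $\tfrac{1}{2}\|\mathcal{A}(X) - b\|^2$ over the unit-trace spectrahedron requires only a minimum-eigenvector computation per iteration, and every iterate is a convex combination of rank-one matrices. The dense $X$ is kept here for clarity; large-scale variants store only low-rank factors or a sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 40, 10
sym = lambda M: (M + M.T) / 2
A = np.stack([sym(rng.standard_normal((n, n))) for _ in range(m)])
b = np.einsum('kij,ij->k', A, np.eye(n) / n)      # feasible target: X = I/n

def opA(X):
    return np.einsum('kij,ij->k', A, X)           # the linear map A(X)

def grad(X):
    return np.einsum('k,kij->ij', opA(X) - b, A)  # gradient of 0.5*||A(X)-b||^2

X = np.zeros((n, n)); X[0, 0] = 1.0               # start at an extreme point
for t in range(300):
    G = grad(X)
    w, V = np.linalg.eigh(G)                      # large-scale codes use Lanczos here
    v = V[:, 0]                                   # minimum-eigenvalue direction
    eta = 2.0 / (t + 2.0)
    X = (1 - eta) * X + eta * np.outer(v, v)      # rank-one Frank-Wolfe update

print("||A(X) - b|| after 300 FW steps:", np.linalg.norm(opA(X) - b))
```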
2.4 Low-rank and Nonconvex SDP Factorizations
When the optimal solution is low-rank, Burer–Monteiro or related factorization approaches encode $X = RR^\top$, yielding a nonconvex problem in $R \in \mathbb{R}^{n \times p}$ with $p \ll n$. Advanced Riemannian optimization can guarantee convergence to global optima under generic rank conditions and offers sublinear to local-linear rates (Tang et al., 2023). Variants exploiting biconvexity with two factors ($X = UV^\top$) and quadratic penalties further exploit structure and may offer improved per-iteration efficiency (Hu, 2018).
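A minimal sketch of the Burer–Monteiro idea on the Max-Cut SDP (NumPy only; the random graph and fixed step size are illustrative assumptions, not a tuned implementation): the constraint $\mathrm{diag}(X) = 1$ becomes unit-norm rows of the factor $R$, and Riemannian gradient ascent on the product of spheres replaces the PSD constraint entirely.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 60, 8
W = np.triu(rng.random((n, n)) < 0.1, 1).astype(float)
W = W + W.T                                          # random graph adjacency
L = np.diag(W.sum(axis=1)) - W                       # graph Laplacian

R = rng.standard_normal((n, p))
R /= np.linalg.norm(R, axis=1, keepdims=True)        # diag(R R^T) = 1

step = 1.0 / max(np.abs(L).sum(axis=1).max(), 1.0)   # conservative step size
for _ in range(500):
    G = 0.5 * (L @ R)                                # Euclidean gradient of <L/4, R R^T>
    G -= np.sum(G * R, axis=1, keepdims=True) * R    # project onto sphere tangents
    R += step * G                                    # ascent step (maximization)
    R /= np.linalg.norm(R, axis=1, keepdims=True)    # retract rows back to unit norm

print("Max-Cut SDP objective <L/4, R R^T>:", 0.25 * np.trace(R.T @ L @ R))
# A cut can be recovered by rounding: signs of R @ g for a random Gaussian g.
```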
2.5 Entropic and Bundle Regularization
Entropic regularization integrates von Neumann entropy as a strongly convex barrier, yielding duals easily solved by randomized trace estimation and effective for Max-Cut, spectral embedding, and eigenvalue projector tasks (Lindsey, 2023). Polyhedral bundle methods build sharp piecewise-linear underapproximations of the PSD cone via rank-one supporting hyperplanes, reducing the problem to a sequence of QPs and carefully capping bundle size based on rank heuristics for practical speed (Cui et al., 14 Oct 2025).
3. Applications of SDP
SDPs provide relaxations and certificate frameworks for core problems:
- Combinatorial Optimization: Max-Cut, graph partition, quadratic assignment, and binary quadratic programs all admit tight SDP relaxations, with integrality constraints yielding mixed-integer SDPs (MISDPs) for exactness (Meijer et al., 2023, Xu et al., 2014).
- Polynomial Optimization: Globally solving polynomial (including bilevel) programs via SOS/moment hierarchies, each level requiring the solution of an SDP of increasing size, converging monotonically to the optimum (Jeyakumar et al., 2015).
- Quantum Information: Quantum state and channel estimation, entanglement detection, and quantum measurement incompatibility are all mapped to SDPs; many exploit the self-duality of $\mathbb{S}^n_+$ and the linear structure of quantum constraints (Skrzypczyk et al., 2023).
- Control and Systems: System norm bounds (e.g., $\mathcal{H}_\infty$) and Lyapunov stability conditions are encoded as LMIs (special SDPs), often with exploitable problem sparsity (Miller et al., 2019); a small Lyapunov LMI sketch follows this list.
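As an illustration of the LMI viewpoint, the sketch below (assuming CVXPY and NumPy; the $2 \times 2$ system matrix is an arbitrary stable example, not taken from the cited works) searches for a Lyapunov certificate $P \succ 0$ with $A^\top P + PA \prec 0$, modeling the strict inequalities with a small margin.

```python
import cvxpy as cp
import numpy as np

A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])                   # eigenvalues -1 and -2: stable
eps = 1e-6                                     # margin standing in for strictness

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> eps * np.eye(2),
               A.T @ P + P @ A << -eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print("status:", prob.status)                  # 'optimal' => Lyapunov certificate found
print("P =\n", P.value)
```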
4. Complexity, Scalability, and Pathological Instances
For generic feasible SDPs,
- Interior-point and analytic-center methods require $O(mn^3 + m^2n^2 + m^3)$ arithmetic per iteration, with practical limits at $n$ in the low thousands.
- Sketching/low-rank SDP methods and factorization approaches track only low-rank factors or a sketch (on the order of $rn$ numbers, plus the $m$-dimensional dual data) and extend computation to much larger $n$ in weakly constrained cases (Ding et al., 2019, Yurtsever et al., 2019).
- Pathological examples (Khachiyan-type SDPs) show that feasible solutions may require doubly-exponential bit-length in the worst case, as determined by the singularity degree of the dual, and such pathologies are not rare (Pataki et al., 2021).
A table summarizing key algorithmic regimes:
| Class | Core Per-Iteration Complexity | Practical Limit | Applicability |
|---|---|---|---|
| Interior-point | $O(mn^3 + m^2n^2 + m^3)$ arith., $O(n^2 + m^2)$ mem | $n \sim 10^3$–$10^4$ | General SDPs |
| First-order/sketch | $O(rn + m)$ mem, matrix–vector/eigenvector arith. | $n \sim 10^6$ | Weakly constrained, low-rank |
| Burer–Monteiro/Riem. | $O(np)$ mem, gradient/Hessian direction-finding | $n \sim 10^5$–$10^6$ | Low-rank optimum, general SDPs |
| LP/SOCP relax. | LP/SOCP subproblems in $O(n^2)$ variables | large $n$, structure-dependent | General via inner-approximating cones |
| Polyhedral bundle | QP in bundle size, extreme-eigenvector computations | large $n$, rank-dependent | Low/medium rank, explicit cones |
5. Quantum and Entropic Algorithms
Quantum algorithms for SDP, such as the quantum ADMM and variational quantum approaches, leverage block-encodings of the data matrices and polynomial proximal operators for PSD constraints. Quantum singular value transformation (QSVT) is used to implement projections onto the PSD cone efficiently. Ergodic convergence rates are reported, with quantum resources scaling polynomially in the problem dimension and inversely in the target accuracy $\epsilon$ (Nie et al., 11 Oct 2025, Patel et al., 2021).
Entropically regularized SDP duals exploit von Neumann entropy for strong convexity, enabling fast convergence and stochastic-trace estimation for large sparse matrices and specific problems, including Max-Cut and extremal eigenproblems (Lindsey, 2023).
6. Limitations, Open Problems, and Future Directions
While substantial advances extend the range of tractable SDPs, significant limitations persist:
- Storage-complexity and per-iteration cost still scale poorly for dense, highly constrained or high-accuracy SDPs.
- Factorization methods may encounter spurious local minima above certain rank thresholds, and their theoretical guarantees depend on problem data and structure (Tang et al., 2023).
- Practical performance of LP/SOCP or polyhedral approximations still lags mature SDP solvers except in favorable regimes (Cui et al., 14 Oct 2025, Roig-Solvas et al., 2022).
- Pathological instances exhibiting exponential solution bit length are common, and no general polynomial-time feasibility algorithms are known (Pataki et al., 2021).
Open research priorities include:
- Automated exploitation of chordal sparsity and group symmetry in large-scale SDPs.
- Guarantees for convergence and tightness in composite structured-subset hierarchies.
- Development of hybrid classical/quantum algorithms with competitive runtime and memory.
- Theoretical bounding of rank-growth in low-rank adaptive methods.
- Strategies to manage or certify solution shape in the presence of exponential-size certificates.
SDP research thus proceeds on multiple fronts: algorithmic efficiency, exploitation of structure, integration with discrete optimization, and sound use of quantum and high-performance classical computing (Nie et al., 11 Oct 2025, Yurtsever et al., 2019, Meijer et al., 2023).