Primal–Dual Interior-Point Framework
- Primal–dual interior-point frameworks are algorithms that solve constrained optimization problems by following a central path using both primal and dual variables.
- They extend classical Euclidean methods to Riemannian manifolds by replacing standard derivatives with covariant derivatives, retractions, and tangent-space computations.
- The method achieves fast local convergence—superlinear or quadratic—and robust global guarantees under standard regularity and geometric conditions.
A primal–dual interior-point framework is a class of algorithms that solve constrained optimization problems by following a trajectory (the central path) through the strictly feasible region of the problem, using both primal and dual variables, via a barrier-augmented Lagrangian and perturbed Karush–Kuhn–Tucker (KKT) conditions. In the generalization to Riemannian manifolds, the framework replaces Euclidean derivatives with Riemannian objects (most notably covariant derivatives and retractions) while fully retaining the primal–dual structure, path-following, and Newton-based direction computation that underpin the success of IPMs in Euclidean domains. Central to the methodology is a primal–dual Newton system constructed and updated in tangent spaces of a product manifold. The method maintains interior feasibility at every step and achieves fast (superlinear, sometimes quadratic) local convergence alongside strong global guarantees under standard regularity conditions (Lai et al., 2022).
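The overall structure is easiest to see in the Euclidean special case $\mathcal{M} = \mathbb{R}^n$, where the retraction reduces to vector addition. The sketch below (the toy problem and all function names are hypothetical illustrations, not code from Lai et al.) follows the central path for a tiny bound-constrained quadratic program:

```python
import numpy as np

# Path-following sketch for the Euclidean special case M = R^n, where the
# retraction is plain vector addition. Hypothetical toy problem:
#   min (x1-1)^2 + (x2-2)^2  s.t.  x >= 0,
# i.e. g(x) = -x <= 0 with slacks s (g(x) + s = 0) and multipliers z > 0.

def solve_toy_qp(iters=60):
    n = 2
    x = np.array([0.5, 0.5])   # strictly feasible interior start
    s = x.copy()               # slacks: g(x) + s = 0  =>  s = x
    z = np.ones(n)             # inequality multipliers, z > 0
    for _ in range(iters):
        mu = 0.1 * (s @ z) / n                  # shrink the barrier parameter
        H = 2.0 * np.eye(n)                     # Hessian of the objective
        Jg = -np.eye(n)                         # Jacobian of g(x) = -x
        r1 = 2.0 * (x - np.array([1.0, 2.0])) + Jg.T @ z   # gradient residual
        r3 = -x + s                             # slack residual g(x) + s
        r4 = z * s - mu                         # perturbed complementarity
        # Full primal-dual Newton system in (dx, dz, ds)
        K = np.block([
            [H,                Jg.T,             np.zeros((n, n))],
            [Jg,               np.zeros((n, n)), np.eye(n)       ],
            [np.zeros((n, n)), np.diag(s),       np.diag(z)      ],
        ])
        d = np.linalg.solve(K, -np.concatenate([r1, r3, r4]))
        dx, dz, ds = d[:n], d[n:2 * n], d[2 * n:]
        # Fraction-to-the-boundary rule: keep s and z strictly positive
        alpha = 1.0
        for v, dv in ((s, ds), (z, dz)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
        x, s, z = x + alpha * dx, s + alpha * ds, z + alpha * dz
    return x, s, z
```

Because the step length never lets `s` or `z` touch zero, every iterate stays in the interior while the shrinking barrier parameter drives the iterates toward the KKT point.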
1. Riemannian Primal–Dual IPM: Mathematical Problem Statement
Let $\mathcal{M}$ be an $n$-dimensional, connected, complete Riemannian manifold with Riemannian metric $\langle \cdot, \cdot \rangle_x$ on $T_x\mathcal{M}$. The general nonlinear constrained optimization problem is

$$\min_{x \in \mathcal{M}} \ f(x) \quad \text{s.t.} \quad h(x) = 0, \quad g(x) \leq 0,$$

with smooth ($C^2$) scalar functions $f$, $h_i$ ($i = 1, \dots, l$), $g_j$ ($j = 1, \dots, m$) on $\mathcal{M}$. Gradients and Hessians are taken with the Levi–Civita connection of the manifold. Slack variables $s \in \mathbb{R}^m$, $s > 0$, so that $g(x) + s = 0$, are introduced; dual multipliers $z \in \mathbb{R}^m$, $z > 0$, for inequalities; and $y \in \mathbb{R}^l$ for equalities, forming the augmented variable $w = (x, y, z, s)$ on the product manifold $\mathcal{N} = \mathcal{M} \times \mathbb{R}^l \times \mathbb{R}^m \times \mathbb{R}^m$.
The barrier-augmented Lagrangian is

$$\mathcal{L}_\mu(w) = f(x) + y^\top h(x) + z^\top \bigl(g(x) + s\bigr) - \mu \sum_{j=1}^{m} \ln s_j,$$

with barrier parameter $\mu > 0$. Stationarity of $\mathcal{L}_\mu$ with respect to $s$ recovers the perturbed complementarity condition $z_j s_j = \mu$ for each $j$.
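To make the geometric objects concrete, the sketch below works on the unit sphere $S^{n-1}$ embedded in $\mathbb{R}^n$ (a hypothetical example; the objective and helper names are illustrations, not from the paper). The Riemannian gradient is the tangent-space projection of the Euclidean gradient, and the barrier-augmented Lagrangian is evaluated directly from its definition:

```python
import numpy as np

# Unit sphere S^{n-1} in R^n: the tangent space at x is {u : <x, u> = 0}.

def proj_tangent(x, v):
    """Orthogonal projection of v onto T_x S^{n-1}."""
    return v - (x @ v) * x

A = np.diag([3.0, 1.0, 0.5])   # hypothetical objective f(x) = x^T A x

def rgrad_f(x):
    """Riemannian gradient: tangent projection of the Euclidean gradient 2*A@x."""
    return proj_tangent(x, 2.0 * A @ x)

def barrier_lagrangian(x, y, z, s, mu, h, g):
    """L_mu(w) = f(x) + y^T h(x) + z^T (g(x) + s) - mu * sum(log s)."""
    return x @ A @ x + y @ h(x) + z @ (g(x) + s) - mu * np.sum(np.log(s))
```

The projection step is exactly what distinguishes the Riemannian gradient from its Euclidean counterpart: the result always lies in the tangent space at $x$.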
2. Barrier-Augmented Primal–Dual Residuals and KKT System
The perturbed KKT (primal–dual) system, written in terms of a vector field $F$ on the product manifold $\mathcal{N}$, is

$$F(w) := \begin{pmatrix} \operatorname{grad}_x \mathcal{L}(x, y, z) \\ h(x) \\ g(x) + s \\ ZSe \end{pmatrix} = \mu \hat e, \qquad \hat e := (0, 0, 0, e),$$

where $\mathcal{L}(x, y, z) = f(x) + y^\top h(x) + z^\top g(x)$, $Z = \operatorname{diag}(z)$, $S = \operatorname{diag}(s)$, and $e$ is the all-ones vector.
Blockwise, this generates:
- Primal gradient residual: $\operatorname{grad} f(x) + \sum_{i=1}^{l} y_i \operatorname{grad} h_i(x) + \sum_{j=1}^{m} z_j \operatorname{grad} g_j(x)$
- Primal equality residuals: $h(x) = 0$ and $g(x) + s = 0$
- Dual complementarity residual: $ZSe - \mu e = 0$
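As a concrete check of these blocks, the following sketch evaluates $F(w)$ for a hypothetical Euclidean toy problem ($\mathcal{M} = \mathbb{R}^2$) whose KKT point can be written down by hand; at that point the residual vanishes for $\mu = 0$:

```python
import numpy as np

# Residual blocks of the perturbed KKT vector field, Euclidean special case.
# Hypothetical toy problem:
#   f(x) = 0.5*||x||^2,  h(x) = x2 - 1 = 0,  g(x) = 1 - x1 <= 0.
# By hand: x* = (1, 1), y* = -1, z* = 1, s* = 0 satisfies the KKT conditions.

def kkt_residual(x, y, z, s, mu):
    grad_f = x                              # gradient of 0.5*||x||^2
    Jh = np.array([[0.0, 1.0]])             # gradient of h
    Jg = np.array([[-1.0, 0.0]])            # gradient of g
    r_grad = grad_f + Jh.T @ y + Jg.T @ z   # primal gradient residual
    r_h    = np.array([x[1] - 1.0])         # equality residual h(x)
    r_gs   = np.array([1.0 - x[0]]) + s     # slack residual g(x) + s
    r_comp = z * s - mu                     # complementarity ZSe - mu*e
    return np.concatenate([r_grad, r_h, r_gs, r_comp])
```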
3. Riemannian Primal–Dual Newton Step
The Newton step is computed by linearizing $F$ via its covariant derivative $\nabla F(w)$ and solving

$$\nabla F(w)[\Delta w] = -F(w) + \mu \hat e$$

in the tangent space $T_w \mathcal{N}$, where $\mu \hat e$ has a nonzero component only in the complementarity block. The Hessian block in $\nabla F(w)$ is the Riemannian Hessian

$$\operatorname{Hess}_x \mathcal{L}(x, y, z) = \operatorname{Hess} f(x) + \sum_{i=1}^{l} y_i \operatorname{Hess} h_i(x) + \sum_{j=1}^{m} z_j \operatorname{Hess} g_j(x).$$

The full system comprises four coupled blocks; however, the condensed saddle-point system on $T_x\mathcal{M} \times \mathbb{R}^l$ is formed by block elimination of $\Delta s$ and $\Delta z$:

$$\begin{pmatrix} \mathcal{A}_w & H_w \\ H_w^* & 0 \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = \begin{pmatrix} c \\ d \end{pmatrix},$$

with $\mathcal{A}_w = \operatorname{Hess}_x \mathcal{L} + G_w S^{-1} Z\, G_w^*$ incorporating second-derivative and scaling terms, $H_w : \mathbb{R}^l \to T_x\mathcal{M}$ mapping the dual search direction to the tangent space, $G_w : \mathbb{R}^m \to T_x\mathcal{M}$ aggregating the inequality constraint gradients, and right-hand side $(c, d)$ assembled from the residual blocks.
The updates for $\Delta s$ and $\Delta z$ are recovered via

$$\Delta s = -\bigl(g(x) + s\bigr) - G_w^* \Delta x, \qquad \Delta z = S^{-1}\bigl(\mu e - ZSe - Z \Delta s\bigr).$$
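The block elimination can be verified numerically in the Euclidean special case, where $G_w^*$ and $H_w^*$ become Jacobian matrices (the data below is random and hypothetical). The condensed system must reproduce the $(\Delta x, \Delta y)$ components of the full four-block solve:

```python
import numpy as np

# Block elimination of (dz, ds) from the 4-block Newton system, Euclidean
# special case with hypothetical random data.
rng = np.random.default_rng(1)
n, l, m = 4, 1, 2
H  = 3.0 * np.eye(n)                      # Hessian block Hess_x L (made SPD)
Jh = rng.normal(size=(l, n))              # equality constraint gradients
Jg = rng.normal(size=(m, n))              # inequality constraint gradients
z, s = rng.uniform(0.5, 2.0, m), rng.uniform(0.5, 2.0, m)
r1, r2, r3, r4 = (rng.normal(size=k) for k in (n, l, m, m))
Z, S = np.diag(z), np.diag(s)

# Full system rows:  H dx + Jh^T dy + Jg^T dz = -r1 ;  Jh dx = -r2 ;
#                    Jg dx + ds = -r3 ;  S dz + Z ds = -r4
K = np.block([
    [H,                Jh.T,             Jg.T,             np.zeros((n, m))],
    [Jh,               np.zeros((l, l)), np.zeros((l, m)), np.zeros((l, m))],
    [Jg,               np.zeros((m, l)), np.zeros((m, m)), np.eye(m)       ],
    [np.zeros((m, n)), np.zeros((m, l)), S,                Z               ],
])
full = np.linalg.solve(K, -np.concatenate([r1, r2, r3, r4]))
dx_full, dy_full = full[:n], full[n:n + l]

# Condensed system: eliminate ds = -r3 - Jg dx and dz = S^{-1}(-r4 - Z ds)
Aw = H + Jg.T @ np.diag(z / s) @ Jg       # Hess_x L + G S^{-1} Z G^*
c  = -r1 - Jg.T @ np.linalg.solve(S, Z @ r3 - r4)
Kc = np.block([[Aw, Jh.T], [Jh, np.zeros((l, l))]])
cond = np.linalg.solve(Kc, np.concatenate([c, -r2]))
dx, dy = cond[:n], cond[n:]
```

The condensed matrix is symmetric and much smaller ($n + l$ versus $n + l + 2m$), which is what makes operator-based Krylov solvers attractive for the inner iteration.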
4. Step Selection and Globalization
At each step, the slack and inequality-multiplier variables must maintain strict positivity, enforced by a centrality condition. The step length $\alpha \in (0, 1]$ is chosen by two rules:
- Centrality: $s + \alpha \Delta s > 0$ and $z + \alpha \Delta z > 0$, enforced by a fraction-to-the-boundary cap $\alpha \leq \tau \min\{-s_j / \Delta s_j : \Delta s_j < 0\}$ (and analogously for $z$), with $\tau \in (0, 1)$;
- Armijo-type decrease on the merit function $\varphi(w) = \tfrac{1}{2}\|F(w) - \mu \hat e\|^2$:

$$\varphi\bigl(\bar R_w(\alpha \Delta w)\bigr) \leq (1 - 2 c_1 \alpha)\, \varphi(w),$$

with $c_1 \in (0, 1/2)$. Backtracking reduces $\alpha \leftarrow \rho \alpha$, $\rho \in (0, 1)$, until both criteria are satisfied. The update is performed via the manifold retraction: $w^{+} = \bar R_w(\alpha \Delta w)$, where $\bar R$ acts as the retraction $R$ on the $\mathcal{M}$-component and as vector addition on the Euclidean components.
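A minimal sketch of the two rules, using a hypothetical toy residual standing in for $F$: the fraction-to-the-boundary cap protects the positive components, and backtracking enforces the $(1 - 2c_1\alpha)$ decrease that a Newton direction guarantees for small enough $\alpha$:

```python
import numpy as np

# Step selection sketch (hypothetical names): fraction-to-the-boundary cap
# plus Armijo backtracking on the merit phi(w) = 0.5*||F(w)||^2.

def merit(F, w):
    r = F(w)
    return 0.5 * (r @ r)

def step_length(F, w, dw, pos_idx, tau=0.995, c1=1e-4, rho=0.5, max_back=50):
    alpha = 1.0
    v, dv = w[pos_idx], dw[pos_idx]
    neg = dv < 0
    if neg.any():                          # centrality: keep w[pos_idx] > 0
        alpha = min(alpha, tau * np.min(-v[neg] / dv[neg]))
    phi0 = merit(F, w)
    for _ in range(max_back):              # Armijo backtracking
        if merit(F, w + alpha * dw) <= (1.0 - 2.0 * c1 * alpha) * phi0:
            break
        alpha *= rho
    return alpha

# Toy residual: F(w) = [w0^2 - 1, w0*w1 - mu], with w1 > 0 required (index 1).
mu = 0.1
F = lambda w: np.array([w[0] ** 2 - 1.0, w[0] * w[1] - mu])
w = np.array([2.0, 1.0])
J = np.array([[2 * w[0], 0.0], [w[1], w[0]]])   # Jacobian of F at w
dw = np.linalg.solve(J, -F(w))                  # Newton direction
alpha = step_length(F, w, dw, pos_idx=np.array([1]))
```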
5. Convergence Theorems: Local and Global Guarantees
Local convergence: Given a solution $w^* = (x^*, y^*, z^*, s^*)$ satisfying
- existence of a KKT point (A1),
- Riemannian LICQ at $x^*$,
- strict complementarity ($z_j^* > 0$) for active inequality constraints $g_j$,
- second-order sufficiency ($\operatorname{Hess}_x \mathcal{L}(w^*)$ positive-definite on the critical subspace),
then the damped-Newton method (with diminishing barrier parameters $\mu_k \to 0$ and step sizes $\alpha_k \to 1$) converges locally superlinearly (quadratically if $\mu_k$ is scaled as $O(\|F(w_k)\|^2)$ and $\alpha_k \to 1$ sufficiently rapidly) [(Lai et al., 2022), Thm 5.3].
Global convergence: Under Lipschitz continuity of $F$ and its covariant derivative $\nabla F$ (measured under parallel transport), compact level sets of the merit function, and nonsingularity of $\nabla F$ along the iterates, the line-search implementation generates iterates for which $\|F(w_k) - \mu_k \hat e\| \to 0$ with $\mu_k \to 0$, and every limit point is a Riemannian KKT point [(Lai et al., 2022), Thm 6.3].
6. Algorithmic and Geometric Ingredients
- Retraction: A smooth mapping $R : T\mathcal{M} \to \mathcal{M}$ generalizes the exponential map, satisfying $R_x(0_x) = x$ and $\mathrm{D}R_x(0_x) = \mathrm{id}_{T_x\mathcal{M}}$.
- Vector transport: Transports tangent vectors from $T_x\mathcal{M}$ to $T_{R_x(v)}\mathcal{M}$, facilitating step acceptance, merit decrease, and matrix Lipschitz estimates.
- Inner linear solves: The condensed Newton system is solved by Krylov-type methods (e.g., Conjugate Residual), exploiting operator action for efficiency and avoiding explicit dense matrix forms, essential on non-Euclidean domains.
- Stopping criteria: The algorithm monitors the residual norm $\|F(w_k)\|$; termination occurs once it falls below a prescribed threshold.
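On the unit sphere, the retraction and a projection-based vector transport have simple closed forms; the sketch below (hypothetical helper names) implements both:

```python
import numpy as np

# Geometric ingredients on the unit sphere S^{n-1} (hypothetical names):
# projective retraction and projection-based vector transport.

def retract(x, v):
    """R_x(v) = (x + v)/||x + v||: a first-order approximation of exp_x."""
    y = x + v
    return y / np.linalg.norm(y)

def transport(x, v, u):
    """Move u in T_x S^{n-1} to T_y S^{n-1}, y = R_x(v), by projecting onto T_y."""
    y = retract(x, v)
    return u - (y @ u) * y
```

Both maps cost a handful of vector operations, which is why retraction-based updates are preferred over exact geodesics in practical implementations.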
7. Numerical Behavior and Applications
Empirical results show that the Riemannian primal–dual interior-point method (RIPM) achieves high accuracy and robust convergence for a variety of nonconvex optimization problems with manifold constraints (Lai et al., 2022). The generalization to the manifold setting, with careful attention to tangent-space differentiation, geometry-aware step selection, and retraction-based updates, preserves in practice and in theory the stability, fast local convergence, and global path-following guarantees of classical primal–dual IPMs. Numerical comparisons show that the method matches or exceeds the performance of Euclidean competitors on problems with intrinsic manifold structure.
8. Comparison to Classical Euclidean and Other Extensions
The Riemannian framework generalizes all core steps and guarantees of the classical (Euclidean) primal–dual IPM:
- Barrier-augmented Lagrangian and central-path system remain, now posed on the manifold $\mathcal{M}$ (and the product manifold $\mathcal{N}$).
- The Newton system incorporates Riemannian gradient and Hessian via the Levi–Civita connection.
- Retraction and vector transport replace Euclidean vector addition and the trivial identification of tangent spaces.
- Convergence theory directly extends, modulo Riemannian KKT nonsingularity, manifold LICQ, and compactness considerations.
This formulation provides a template for further generalizations, including infinite-dimensional manifolds and optimization subject to complex geometric constraints. The robust geometric machinery enables efficient computation and makes the approach broadly applicable in modern geometric optimization (Lai et al., 2022).