Primal-Dual Infeasible Interior-Point Framework
- The primal-dual infeasible interior-point framework is a class of algorithms that solve convex optimization problems while enforcing feasibility progressively through penalty functions and barrier regularization.
- It decouples primal and dual updates via operator splitting and proximal mappings, enabling flexible handling of infeasible starting points.
- The method achieves robust convergence rates and efficient certificate generation, proving advantageous in large-scale and distributed optimization settings.
A primal-dual infeasible interior-point framework is a class of algorithms for convex optimization, conic programming, and saddle-point problems in which iterates are not required to remain feasible with respect to either the primal or the dual constraints at each iteration. Instead, feasibility is progressively enforced through penalization (typically barrier functions), proximal mappings, relaxation variables, or perturbations, while the iterates are guided toward optimality via the central-path system. This approach contrasts with feasible-start interior-point methods, which are initialized from an interior feasible point, and offers advantages in flexibility, the ability to handle infeasibility, certificate generation, and computational efficiency.
1. Theoretical Foundations and Saddle-Point Structure
Primal-dual infeasible IP frameworks are rooted in the reformulation of convex optimization and variational inequalities as saddle-point problems. For a broad class of convex programs—such as those of the form

$$\min_{x} \; f(x) + g(Kx), \qquad f, g \text{ convex}, \; K \text{ linear},$$

the optimality conditions can be written as a monotone inclusion $0 \in H(u)$ in the joint primal-dual variable $u = (x, y)$. The essential step is to reformulate the algorithmic update at each iteration as an inexact proximal (or preconditioned proximal) point method,

$$0 \in H(u^{k+1}) + M_k\,(u^{k+1} - u^k),$$

where $M_k$ is a positive semidefinite preconditioner that may, for example, encode a barrier term or a Hessian approximation. This broad perspective enables, for instance, the decoupling and alternating minimization of the primal and dual subproblems via operator splitting, barrier regularization, or both (Valkonen, 2017, Karimi et al., 2018).
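As a concrete illustration of the preconditioned proximal-point viewpoint, the sketch below applies it to the affine monotone KKT operator of a small equality-constrained quadratic program. The problem data and the diagonal choice of preconditioner are assumptions made for this example, not taken from the cited works.

```python
import numpy as np

# Preconditioned proximal-point iteration for the monotone inclusion
# 0 in F(z), with F(z) = H z + cvec affine, on a toy equality-constrained
# quadratic program: min 0.5*x^T Q x  s.t.  A x = b.
Q = np.diag([2.0, 1.0])          # primal Hessian (positive definite)
A = np.array([[1.0, 1.0]])       # equality constraint A x = b
b = np.array([1.0])

# KKT operator F(x, y) = (Q x + A^T y,  b - A x)  =>  F(z) = H z + cvec
H = np.block([[Q, A.T], [-A, np.zeros((1, 1))]])
cvec = np.concatenate([np.zeros(2), b])

M = 0.5 * np.eye(3)              # simple positive definite preconditioner
z = np.zeros(3)                  # infeasible start: A x != b initially
for _ in range(200):
    # Implicit step: 0 = F(z+) + M (z+ - z)  =>  (H + M) z+ = M z - cvec
    z = np.linalg.solve(H + M, M @ z - cvec)

x, y = z[:2], z[2:]
print(np.round(x, 4))            # approaches the KKT point x* = (1/3, 2/3)
```

Note that the iteration starts from a point violating $Ax = b$, yet converges because each implicit step is a resolvent of the monotone KKT operator in the metric induced by $M$.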
The infeasible-start approach is formalized by augmenting the problem with artificial variables (e.g., the "homogeneous self-dual embedding" or an auxiliary relaxation variable $\tau > 0$). For example, in domain-driven formulations—minimizing $\langle c, x \rangle$ subject to $Ax \in D$ for a convex set $D$—infeasibility is managed by embedding the original constraints into a lifted variable space in which the constraint is relaxed toward a point $z^0 \in \operatorname{int} D$, where $z^0$ is an arbitrarily chosen strictly interior point (Karimi et al., 2018, Karimi et al., 2019).
2. Barrier Regularization and Infeasibility Management
A central mechanism in these frameworks is the use of convex barrier functions (logarithmic, entropic, or general self-concordant barriers) to enforce strict interiority with respect to the cone $K$ or a general convex set $D$. For symmetric cones, the dual barrier $G$—for instance $G(y) = -\sum_i \ln y_i$ on the nonnegative orthant—with gradient $\nabla G(y)$ diverging at the boundary, provides strong monotonicity, ensuring that iterates stay strictly in the interior and that approaches to the boundary incur an infinite penalty. This is leveraged to regularize otherwise ill-posed subproblems and can be formalized as solving barrier-proximal subproblems in the dual,

$$y^{k+1} = \arg\min_{y} \; \tfrac{1}{2}\|y - v^k\|^2 + \mu_k\, G(y),$$

where the "central path" parameter $\mu_k > 0$ controls the infeasibility measure and guides iterates along a trajectory interpolating between non-feasible and optimal solutions (Valkonen, 2017).
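For intuition, the barrier-proximal mapping has a closed form on the nonnegative orthant with the logarithmic barrier. The following sketch is a textbook-style computation (not code from the cited papers) showing how the central-path parameter trades off proximity against strict interiority.

```python
import numpy as np

# Barrier-proximal step for the nonnegative orthant with G(s) = -sum(log s_i):
#     min_s  0.5 * ||s - v||^2 + mu * G(s).
# Stationarity gives s - v - mu / s = 0, i.e. s^2 - v*s - mu = 0, whose
# positive root below is the unique (strictly interior) minimizer.
def barrier_prox(v, mu):
    return 0.5 * (v + np.sqrt(v ** 2 + 4.0 * mu))

v = np.array([-1.0, 0.0, 2.0])   # target point, partly outside the cone
for mu in (1.0, 1e-2, 1e-6):     # shrinking the central-path parameter mu
    s = barrier_prox(v, mu)
    print(mu, np.round(s, 4))    # s stays > 0; as mu -> 0, s -> max(v, 0)
```

For large $\mu$ the output is pushed deep into the interior; as $\mu \to 0$ it approaches the Euclidean projection onto the cone, mirroring the interpolation along the central path described above.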
In the domain-driven approach, infeasibility is handled by maintaining an explicit proximity to the interior via a combination of augmented Lagrangian terms and barrier penalties in both primal and dual spaces. These strategies allow for robust progress—even from infeasible initializations—by artificially absorbing infeasibility into slack or penalty variables and gradually pushing both the primal and dual iterates toward feasible regions (Karimi et al., 2018, Karimi et al., 2019).
3. Algorithmic Structure: Decoupling and Operator Splitting
Decoupling of primal and dual updates is achieved via careful design of the preconditioner $M_k$, which is typically chosen to be block-diagonal or to annihilate the off-diagonal coupling terms in the KKT system. This enables alternating or semi-implicit updates, for instance an explicit primal Euclidean-proximal step,

$$x^{k+1} = \arg\min_{x} \; f(x) + \langle K^{\top} y^{k}, x \rangle + \tfrac{1}{2\tau}\|x - x^{k}\|^{2},$$

followed by a nonlinear barrier-regularized dual update. This structure generalizes methods like Chambolle–Pock and ADMM, but with a dual step adaptive to the geometry of the underlying cone (Valkonen, 2017, Neuenhofen, 2018).
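The sketch below illustrates this alternating structure on a small inequality-constrained quadratic program: the primal step is a closed-form Euclidean prox, while the dual step is a log-barrier-regularized ascent step on the multipliers. The problem data, step sizes, and geometric schedule for the central-path parameter are illustrative assumptions, not a method from the cited papers.

```python
import numpy as np

# Chambolle-Pock style splitting with a barrier-regularized dual step for
#     min_x 0.5*||x - c||^2   s.t.   K x >= b,
# via the saddle point  min_x max_{y >= 0} 0.5*||x - c||^2 + <y, b - K x>.
c = np.array([-1.0, 2.0])
K = np.eye(2)
b = np.zeros(2)

tau, sigma = 0.5, 0.5            # step sizes with tau*sigma*||K||^2 < 1
x, y, mu = np.zeros(2), np.ones(2), 1.0
for _ in range(300):
    # Explicit Euclidean-proximal primal step (closed form for this f)
    x_new = (tau * (c + K.T @ y) + x) / (1.0 + tau)
    # Barrier-regularized dual step on the extrapolated residual:
    #   max_y <r, y> - ||y - y_k||^2 / (2*sigma) + mu * sum(log y)
    r = b - K @ (2.0 * x_new - x)
    z = y + sigma * r
    y = 0.5 * (z + np.sqrt(z ** 2 + 4.0 * sigma * mu))  # positive root
    x = x_new
    mu *= 0.9                    # drive the central-path parameter to zero

print(np.round(x, 4), np.round(y, 4))   # x -> (0, 2), y -> (1, 0)
```

The multipliers remain strictly positive at every iteration thanks to the barrier term, yet converge to the true (partly zero) optimal multipliers as $\mu \to 0$.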
Further algorithmic approaches include the use of proximal point iterations with spectrum- or Hessian-based preconditioners, Schur complement reduction to efficiently handle coupling constraints, and iterative refinement or smoothing strategies for non-symmetric and domain-driven settings (Karim et al., 2021).
4. Convergence Analysis and Rates
Convergence guarantees, including global convergence and rates, depend on the geometry of the regularization and the underlying cones. For general symmetric cones, the analysis yields a sublinear $O(1/N)$ rate on squared distances to the solution (ergodic convergence), based on strong monotonicity or strong convexity estimates of the dual barrier regularization—even if the original dual function is not strongly convex (Valkonen, 2017). For the important case of the second-order cone, where $G$ is strongly convex and the complementarity conditions are nondegenerate, linear convergence can be established: $\|u^{k} - u^{*}\|^{2} \leq C \rho^{k}$ for some $\rho \in (0, 1)$. Domain-driven and self-dual embedding frameworks achieve the best known theoretical iteration complexity, $O(\sqrt{\nu}\,\log(1/\epsilon))$ for a $\nu$-self-concordant barrier, matching conic optimization bounds (Karimi et al., 2018, Karimi et al., 2019, Papp et al., 22 Feb 2025). Notably, rigorous status determination (optimality, infeasibility, unboundedness) and robust certificates are supported, as certificates can be extracted from the limiting behavior of scaled iterates.
5. Handling Infeasibility and Status Determination
A defining attribute of primal-dual infeasible interior-point frameworks is the explicit detection and management of infeasibility. This is achieved through termination criteria based on scaled residuals, duality gaps, and proximity to supporting hyperplanes or critical cones. For example, in the domain-driven context, an approximate certificate of infeasibility is returned when a scaled dual-feasibility measure falls below a tolerance while the corresponding Farkas-type gap remains positive, and unboundedness is detected when the objective escapes below a prescribed threshold while the constraint violation stays bounded. These conditions can be made rigorous even in the absence of feasibility (e.g., by using strict or weak detectors, or by projection with respect to the barrier’s local metric) (Karimi et al., 2019).
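As a simplified illustration of such status tests—the tolerance and scaling below are hypothetical choices, not the detectors from the cited papers—the following checks an approximate Farkas certificate of primal infeasibility for linear equality constraints with nonnegative variables.

```python
import numpy as np

# For the primal feasibility set {x : A x = b, x >= 0}, a vector y with
# A^T y <= 0 and b^T y > 0 is a Farkas certificate that the set is empty.
# The check below is scaled by ||y|| so it is invariant to the magnitude
# of the candidate dual ray (tolerance is illustrative).
def check_infeasibility_certificate(A, b, y, tol=1e-8):
    y = y / np.linalg.norm(y)
    dual_violation = np.maximum(A.T @ y, 0.0).max()   # want A^T y <= 0
    gap = b @ y                                       # want b^T y > 0
    return dual_violation <= tol and gap > tol

# x1 + x2 = -1 with x >= 0 is infeasible; y = -1 certifies it:
A = np.array([[1.0, 1.0]])
b = np.array([-1.0])
print(check_infeasibility_certificate(A, b, np.array([-1.0])))  # True
```

In an infeasible interior-point run, such a test would be applied to suitably scaled limit points of the dual iterates rather than to a hand-supplied ray.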
Additionally, perturbation analysis reveals that limit points of infeasible interior-point sequences may converge to values between primal and dual optimal values in the presence of a duality gap—a property useful for bounding objective values in the context of singular SDPs and mixed-integer relaxations (Tsuchiya et al., 2019).
6. Practical Algorithms and Computational Considerations
Primal-dual infeasible interior-point frameworks have been instantiated in a wide variety of algorithmic environments:
- Barrier-based Alternating Minimization: Alternating proximal algorithms and path-following updates using barrier-proximal subproblems for either primal or dual variables (Valkonen, 2017, Neuenhofen, 2018).
- Domain-Driven Interior-Point Methods: Infeasible-start, central-path-tracking algorithms with self-concordant barrier proximity and explicit artificial variables for feasibility augmentation (Karimi et al., 2018, Karimi et al., 2019).
- Preconditioning and Inexact Solves: Schur complement reductions and preconditioned Krylov iterative solvers for large-scale problems, employing strategies to minimize the number of unique eigenvalues and thus CG or MINRES iterations (Karim et al., 2021, Bergamaschi et al., 2019).
- Hybrid Regularization: Inclusion of proximal, penalty, and barrier terms in the KKT system to handle ill-conditioning and rank-deficiency; blending of interior-point methods with the proximal method of multipliers to exploit both stability and polynomial complexity (Pougkakiotis et al., 2019, Pougkakiotis et al., 2020).
- Tensorized and Second-Order Extensions: Generalization to settings such as tensor-train decompositions in high-dimensional SDPs (Kelbel et al., 15 Sep 2025), distributed (Gauss–Jacobi–Newton) decompositions for coupled nonlinear systems (Ali et al., 22 Sep 2024), and mixed primal–primal-dual strategies (Neuenhofen, 2018).
- Adaptive Updates of Barrier and Penalty Parameters: Use of parameter sequences (barrier parameter $\mu$, primal/dual scaling parameters, preconditioning regularization) that are updated in coordination with the progression toward feasibility and duality-gap reduction (Haeser et al., 2017, Pougkakiotis et al., 2019).
Empirical studies consistently report robustness and improved scalability compared to feasible-start methods, performance advantages over ADMM and non-regularized IPMs in large-scale settings, and competitive polynomial-time iteration complexity in well-posed problems.
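To make these ingredients concrete, here is a compact, textbook-style sketch of an infeasible-start primal-dual path-following method for linear programming. The centering parameter, fraction-to-the-boundary rule, and stopping thresholds are illustrative choices, not an implementation from any of the cited works.

```python
import numpy as np

# Infeasible-start primal-dual path-following sketch for the LP
#     min c^T x   s.t.   A x = b, x >= 0.
def ipm_lp(A, b, c, sigma=0.1, iters=50):
    m, n = A.shape
    x, s = np.ones(n), np.ones(n)    # strictly positive but infeasible start
    y = np.zeros(m)
    for _ in range(iters):
        r_p = A @ x - b              # primal residual
        r_d = A.T @ y + s - c        # dual residual
        mu = x @ s / n               # complementarity / barrier measure
        if mu < 1e-10 and np.linalg.norm(r_p) < 1e-8 and np.linalg.norm(r_d) < 1e-8:
            break                    # residuals and gap small: stop
        # Newton system for the perturbed KKT conditions (target sigma*mu)
        KKT = np.block([
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        rhs = -np.concatenate([r_d, r_p, x * s - sigma * mu])
        d = np.linalg.solve(KKT, rhs)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        # Fraction-to-the-boundary step keeps (x, s) strictly interior
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
x, y, s = ipm_lp(A, b, c)
print(np.round(x, 4))                # optimal vertex x* = (1, 0)
```

Both the primal residual $Ax - b$ and the dual residual $A^\top y + s - c$ shrink geometrically along the iterations, so feasibility and optimality are attained together rather than feasibility being required up front.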
7. Summary and Significance
Primal-dual infeasible interior-point frameworks unify several algorithmic innovations:
- Barrier or penalty regularization not only enforces strict interiority (even for infeasible starting points) but also improves strong convexity and monotonicity in possibly degenerate or weakly convex settings.
- Decoupling of the KKT system into tractable subproblems—often via explicit or implicit operator splitting—enables flexible, structure-exploiting updates scalable to large-scale or distributed environments.
- Iterates systematically approximate the central path, and rigorous certificates of infeasibility or unboundedness are computable via scaled residuals and duality theory, even when true feasibility is unattainable.
- The convergence rates attained (linear for structured cones with strong curvature, sublinear $O(1/N)$ for general convex cones) match or improve on the best known results for feasible-start or ergodic methods, with computational designs (e.g., preconditioning, regularization, hybrid direction selection) enabling efficient practical implementation.
This framework thus serves as the theoretical and practical backbone for modern large-scale optimization, bridging classic interior-point methods, first-order splitting techniques, and advanced regularization and preconditioning schemes. Theoretical advances continue to refine robustness, certificate generation, and complexity guarantees, with ongoing research into novel problem classes, distributed optimization, and hybrid algorithm designs (Valkonen, 2017, Karimi et al., 2018, Karimi et al., 2019, Tsuchiya et al., 2019, Karim et al., 2021, Gao et al., 24 Nov 2024, Ali et al., 22 Sep 2024).