Sequential Convex Programming

Updated 20 September 2025
  • Sequential Convex Programming is a method that iteratively solves convex approximations of nonconvex problems by linearizing constraints around the current iterate.
  • It employs trust regions and regularization techniques to maintain feasibility and ensure local convergence under standard regularity conditions.
  • SCP is widely applied in optimal control, trajectory planning, network utility maximization, and power system operations, bridging theory and practice.

Sequential convex programming (SCP) is a class of iterative algorithms for solving nonconvex optimization problems by sequentially forming and solving convex approximations. In particular, SCP is characterized by the repeated linearization or convexification of nonconvex components—such as equality/inequality constraints or objective functions—about the current iterate. Each iteration requires the efficient solution of a convex subproblem, frequently leveraging established convex optimization tools, with the solution of each subproblem serving as the linearization point in the subsequent iteration. SCP has been theoretically and practically validated in a wide range of contexts, including optimal control, nonlinear programming, parametric optimization, robust quantum control, difference-of-convex programming, network utility maximization, stochastic and continuous-time optimal control, trajectory optimization, and reliability-aware power systems operations.

1. Key Principles of Sequential Convex Programming

SCP operates by reformulating a nonconvex (or otherwise challenging) optimization problem as a sequence of tractable convex subproblems. The general structure of the problem class addressed by SCP can be written as:

$$\min_{x} \quad f(x) \quad \text{subject to} \quad g(x) \leq 0, \quad h(x) = 0, \quad x \in \Omega$$

with nonconvex (possibly nonlinear) functions $f, g, h$ and a convex set $\Omega$. At iteration $k$, the method constructs convex (typically affine or quadratic) approximations of the nonconvex parts near the current iterate $x^{(k)}$, yielding a convex subproblem:

$$\min_{x \in \Omega} \quad f_{cvx}^{(k)}(x) \quad \text{subject to} \quad g_{cvx}^{(k)}(x) \leq 0, \; h_{cvx}^{(k)}(x) = 0$$
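
A common construction, when the problem functions are differentiable, is a first-order Taylor (affine) model about $x^{(k)}$; this is the standard textbook choice rather than one specific to any single cited work:

$$g_{cvx}^{(k)}(x) = g\bigl(x^{(k)}\bigr) + \nabla g\bigl(x^{(k)}\bigr)^{\top}\bigl(x - x^{(k)}\bigr), \qquad h_{cvx}^{(k)}(x) = h\bigl(x^{(k)}\bigr) + \nabla h\bigl(x^{(k)}\bigr)^{\top}\bigl(x - x^{(k)}\bigr),$$

with $f_{cvx}^{(k)}$ built analogously, optionally adding a convex quadratic (e.g., Gauss–Newton or regularized-Hessian) term.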

The typical SCP iteration is:

  1. Linearize or convexify nonconvex constraints and objectives about $x^{(k)}$.
  2. Solve the resulting convex program to obtain $x^{(k+1)}$.
  3. Repeat until convergence (as judged by some optimality or stationarity criterion).

A trust-region or regularization term is often employed to ensure that each new iterate remains “close” to the previous one, thereby preserving the validity of the local convexification and improving convergence robustness.
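
A minimal sketch of this loop is given below, assuming CVXPY and NumPy are available; the toy problem (a convex quadratic objective with the nonconvex constraint $x_1 x_2 \geq 1$) and all names are illustrative choices, not drawn from the cited papers.

```python
# Minimal SCP loop sketch: linearize the nonconvex constraint, solve a convex
# subproblem with a trust region, repeat until the step becomes small.
import numpy as np
import cvxpy as cp

def g(x):                      # nonconvex constraint written as g(x) <= 0
    return 1.0 - x[0] * x[1]   # i.e., x1 * x2 >= 1

def grad_g(x):                 # gradient of g, used for the affine model
    return np.array([-x[1], -x[0]])

x_k = np.array([2.0, 2.0])     # initial guess
rho = 1.0                      # trust-region radius

for k in range(20):
    x = cp.Variable(2)
    # Step 1: affine (first-order Taylor) model of g about the current iterate
    g_lin = g(x_k) + grad_g(x_k) @ (x - x_k)
    # Step 2: solve the convex subproblem; the trust region keeps x near x_k
    subproblem = cp.Problem(
        cp.Minimize(cp.sum_squares(x)),        # objective is already convex here
        [g_lin <= 0, cp.norm(x - x_k, "inf") <= rho])
    subproblem.solve()
    # Step 3: stop when the step is small (a simple stationarity proxy)
    if np.linalg.norm(x.value - x_k) < 1e-6:
        break
    x_k = x.value

print("SCP iterate:", x_k)     # approaches (1, 1), the minimizer of ||x||^2 on x1*x2 >= 1
```

The acceptance test, trust-region update rule, and treatment of infeasible linearizations are precisely where the variants surveyed in Section 3 differ.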

2. Theoretical Foundations and Convergence

SCP methods are underpinned by rigorous contraction and convergence results in various settings. Under standard regularity conditions—including smoothness, strong regularity of the constraints, boundedness of Hessian or second-derivative information, and small parameter variations—SCP can be shown to exhibit at least local convergence to a Karush–Kuhn–Tucker (KKT) point of the original problem.

For real-time optimal control and parametric optimization, precise contraction estimates are available:

$$\|z^{(k+1)} - \bar{z}^{(k+1)}\| \leq \omega_k \|z^{(k)} - \bar{z}^{(k)}\| + c_k \|M(\xi_{k+1} - \xi_k)\|$$

where $0 < \omega_k < 1$ is a contraction factor, $c_k > 0$ is a constant quantifying the effect of parameter change, and $z^{(k)}$, $\bar{z}^{(k)}$ are the algorithmic and “true” KKT points, respectively (Quoc et al., 2011, Dinh et al., 2011). If parameter variation is absent or negligibly small, linear convergence is recovered.
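
To make the last statement explicit: if the parameter is held fixed ($\xi_{k+1} = \xi_k$, so that $\bar{z}^{(k)} \equiv \bar{z}$) and $\omega := \sup_k \omega_k < 1$, the estimate telescopes to

$$\|z^{(k)} - \bar{z}\| \leq \omega^{k} \, \|z^{(0)} - \bar{z}\|,$$

i.e., linear (geometric) convergence of the SCP iterates to the KKT point of the fixed problem.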

For continuous-time optimal control, SCP-generated sequences are proven to converge (in appropriate function spaces) to trajectories and adjoint variables satisfying the Pontryagin Maximum Principle, even when state or control variables evolve on manifolds (Bonalli et al., 2020, Bonalli et al., 2019). Stochastic SCP extends this result to systems driven by a multidimensional Wiener process, with accumulation points satisfying the stochastic PMP (Bonalli et al., 2020).

Descent properties and monotonicity are established by means of various inequalities relating decrease in the objective to the step norm, especially in the context of difference-of-convex (DC) problems and structured nonlinear programming (Quoc et al., 2011, Yu et al., 2020, Lu, 2012).

3. Algorithmic Variants and Structural Extensions

SCP has spawned a variety of algorithmic enhancements and tailored specializations:

  • Real-Time Sequential Convex Programming (RTSCP): Performs a single convexification per time step to track the moving optimum as the system parameter evolves, providing computational efficiency for real-time nonlinear model predictive control (Quoc et al., 2011).
  • Adjoint-based Predictor–Corrector SCP (APCSCP): Employs inexact Jacobian approximations to accelerate large-scale NMPC, particularly when full Jacobian evaluation is prohibitive; convergence guarantees are provided when adjoint-based corrections are incorporated (Dinh et al., 2011).
  • DC-Constrained SCP: Handles general nonconvex constraints of difference-of-convex (DC) form $g(x) = u(x) - v(x)$ by linearizing only the concave term; includes relaxation of inconsistent linearizations via slack variables and penalization (Quoc et al., 2011). The resulting convex surrogate is spelled out after this list.
  • Structured Nonlinear Programming SCP: Incorporates nonmonotone step acceptance schemes and adaptive local Lipschitz constants to enhance practical performance in structured problems (Lu, 2012).
  • Monotone Line Search SCP (SCP$_{ls}$): Integrates a sufficient-decrease and feasibility-preserving line search for DC programming, with convergence rate analyses based on the Kurdyka–Łojasiewicz property (Yu et al., 2020).
  • Prox-linear SCP: Applies proximal regularization and exact penalization; often used in large-scale trajectory optimization and GPU-accelerated Monte Carlo analyses (Chari et al., 28 Apr 2024).
  • Penalty Homotopy SCP: Gradually enforces complementarity constraints in quadratic programs via homotopy on the penalty parameter, enabling efficient solution of QPs with linear complementarity (Hall et al., 2021).
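
For the DC-constrained variant above, the convex surrogate obtained by linearizing only the concave part is the standard construction, restated here for concreteness: at iterate $x^{(k)}$ the constraint $g(x) = u(x) - v(x) \leq 0$ is replaced by

$$u(x) - v\bigl(x^{(k)}\bigr) - \nabla v\bigl(x^{(k)}\bigr)^{\top}\bigl(x - x^{(k)}\bigr) \leq 0,$$

which is convex in $x$ and, since convexity of $v$ gives $v(x) \geq v(x^{(k)}) + \nabla v(x^{(k)})^{\top}(x - x^{(k)})$, upper-bounds $g(x)$; any point feasible for the surrogate is therefore feasible for the original DC constraint.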

4. Practical Applications Across Domains

SCP has demonstrated effectiveness in a range of real-world and computationally challenging settings:

  • Optimal Control: Iterative convexification of nonlinear or stochastic dynamics, including applications in trajectory optimization (e.g., lunar descent, rendezvous, hydroelectric plant NMPC, multiagent quadrotor coordination), robust quantum gate control (with parametric and stochastic uncertainties), and power systems operations (Quoc et al., 2011, Dinh et al., 2011, Bonalli et al., 2019, Kosut et al., 2013, Zhang et al., 17 Oct 2024, Yuan et al., 19 Aug 2025).
  • Network Utility Maximization (NUM): Nonconvex utility functions arising from S-curve (sigmoidal) utilities are addressed by transforming the objective and convexifying the resulting reverse-convex constraints, leading to distributed, tractable rate-allocation algorithms (Sehati et al., 2011).
  • Parametric Markov Decision Processes: Verification and parameter synthesis via convexification and geometric programming within a sequential framework, delivering scalability to problems with thousands of states or parameters (Cubuktepe et al., 2017).
  • Reliability-Aware Power Systems: Optimization under decision-dependent uncertainty handled by linearizing nonlinear reliability (logistic) functions, leading to tractable iterative procedures for dispatch with reliability constraints (Zhang et al., 17 Oct 2024).
  • Continuous-time Multiagent Trajectory Optimization: Filtering-based warm-start strategies in conjunction with SCP ensure tight constraint satisfaction and drastically reduced computation time in multiagent quadrotor problems (Yuan et al., 19 Aug 2025).
  • Stochastic Control: SCP adapted for non-linear stochastic systems drives accumulation points to satisfy the stochastic PMP, with practical deterministic transcriptions leveraged for computational implementation (Bonalli et al., 2020, Echigo et al., 25 Apr 2024).

5. Implementation Considerations and Computational Aspects

  • Convexification Techniques: Linearization or inner-convex approximation using local Taylor series expansions; higher-order terms may be managed via convex overestimators or regularizers. Inner-convexification ensures recursive feasibility and is compatible with trust-region or proximal strategies (Virgili-Llop et al., 2018).
  • Handling of Constraints: Nonconvexities in equality, inequality, and manifold constraints are addressed via linearization, auxiliary variable reformulation, and proximal penalties. In power systems applications, absolute values or squares in decision-dependent failure models are managed via auxiliary variable introduction and epigraph constraints (Zhang et al., 17 Oct 2024).
  • Solver Architecture: SCP can utilize both second-order (interior-point or active-set) and first-order (projected gradient, proportional-integral projected gradient) methods for the convex subproblems. First-order, factorization-free solvers such as PIPG have been shown to be compatible with GPU acceleration and large-scale Monte Carlo parallelization, as in 6-DoF powered-descent guidance (Chari et al., 28 Apr 2024, Chari et al., 7 Feb 2024). A minimal first-order subproblem solver is sketched after the summary table below.
  • Initialization and Warm-Starting: Practical performance, especially in multiagent problems, is sensitive to initialization. Filtering-based warm-starts—wherein an initial guess is generated that already nearly satisfies constraints, often using a Bayesian state estimation analogy—improve both objective value consistency and constraint satisfaction, and often reduce computation time by orders of magnitude compared to random starts (Yuan et al., 19 Aug 2025).
| SCP Variant | Typical Application | Key Algorithmic Feature |
| --- | --- | --- |
| RTSCP | Real-time MPC / optimal control | Single convexification per time step |
| APCSCP | Large-scale NMPC | Adjoint-based inexact Jacobians, efficient updates |
| DC-constrained SCP | DC programming, MPCC | Concave-part linearization, slack-variable relaxation |
| Filtering-based warm-start SCP | Multiagent trajectory optimization | Online Bayesian-estimate initialization |
| GPU-accelerated prox-linear SCP | Monte Carlo analysis | Matrix-inverse-free, parallelizable prox solver |
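
As a concrete illustration of the first-order, factorization-free option noted under Solver Architecture, the following sketch solves a box-constrained convex QP subproblem with plain projected gradient descent. It is a simplified stand-in for PIPG-style solvers; the function name, step rule, and example data are assumptions made for illustration, not the cited implementation.

```python
# Hedged sketch: projected gradient for a box-constrained convex QP subproblem,
#   minimize 0.5 * x'Qx + c'x   subject to   lb <= x <= ub.
# Only matrix-vector products and an elementwise projection are needed (no
# factorization), which is what makes this solver family easy to parallelize.
import numpy as np

def projected_gradient_qp(Q, c, lb, ub, x0, iters=500):
    L = np.linalg.norm(Q, 2)                # gradient Lipschitz constant (spectral norm)
    x = x0.astype(float).copy()
    for _ in range(iters):
        grad = Q @ x + c                    # gradient of the quadratic objective
        x = np.clip(x - grad / L, lb, ub)   # gradient step, then project onto the box
    return x

# Example usage on a small strongly convex QP
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = A.T @ A + np.eye(5)                     # symmetric positive definite
c = rng.standard_normal(5)
x_sol = projected_gradient_qp(Q, c, lb=-np.ones(5), ub=np.ones(5), x0=np.zeros(5))
```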

6. Advantages, Limitations, and Future Directions

Advantages:

  • Computational efficiency due to convex subproblem solutions and rapid convergence under proper regularity.
  • Flexibility in modeling; applicable to problems with nonlinear, DC, or complementarity constraints, as well as manifold, stochastic, and parametric variants.
  • Feasibility guarantees when inner-convex approximations and trust regions are employed, as well as global convergence to stationarity under appropriate conditions.
  • Compatibility with distributed and parallel computing architectures, including GPU acceleration.

Limitations:

  • Local convergence guarantees; the method’s iterates are guaranteed to converge only when initialized close to a solution and when the parameter changes or nonlinearities are not too large.
  • Dependence on smoothness and regularity assumptions; poorly conditioned or highly nonlinear problems may not meet contraction or descent criteria.
  • Potential sensitivity to initialization; filtering or problem-specific warm starts are often necessary to avoid poor convergence or infeasibility.
  • The necessity of accurate local convexification; significant model mismatch or rapid changes in system behavior may challenge the method's reliability.

Future directions include:

  • Extension to more general classes of nonconvex (including non-differentiable) cost and constraint structures.
  • Integration of robustness (e.g., to model mismatch, uncertainty, disturbances) in the SCP framework, especially for real-time MPC and cyber-physical systems control.
  • Further development of adaptive and learning-based convexification strategies, as well as improved computational frameworks for large-scale, real-time industrial deployments.
  • Deeper study of the trade-offs between convergence rate, computational overhead, and solution quality across SCP variants and applications.

7. Summary

Sequential convex programming provides a unifying, computationally efficient framework, with well-characterized local convergence guarantees, for tackling nonconvex optimization problems in engineering and scientific applications. Through iterative convexification, rigorous convergence analysis, and rapidly maturing implementation practices spanning real-time, stochastic, multiagent, and GPU-parallelized settings, SCP continues to play a central role in modern algorithmic optimization and embedded control, as elaborated in foundational and applied research (Quoc et al., 2011, Dinh et al., 2011, Bonalli et al., 2019, Virgili-Llop et al., 2018, Yuan et al., 19 Aug 2025, Zhang et al., 17 Oct 2024).
