
Sequential Convex Optimization Framework

Updated 2 February 2026
  • Sequential convex optimization is a framework that solves nonconvex problems by iteratively approximating them with tractable convex subproblems.
  • Its methodology leverages linearizations, trust regions, and slack-based relaxations to ensure robust convergence and maintain feasibility.
  • Widely applied in control, trajectory planning, and game theory, it delivers robust solutions in complex decision-making scenarios.

Sequential convex optimization is a powerful algorithmic paradigm for solving nonconvex optimization problems by iteratively constructing and solving a sequence of convex surrogate subproblems that approximate the original problem in the neighborhood of the current iterate. This framework encompasses methods such as sequential convex programming (SCP), trust-region sequential convex optimization, convex–concave procedures for difference-of-convex (DC) programs, and their modern extensions. The attractiveness of sequential convex optimization lies in the amenability of convex subproblems to efficient global solution, robust convergence theory under mild assumptions, and broad applicability across control, trajectory planning, DC programming, game theory, and robust/nonlinear system analysis.

1. General Problem Formulation and Scope

Sequential convex optimization addresses nonconvex minimization problems of the form

$$\min_{x\in\Omega}\ f_0(x)\quad\text{subject to}\quad f_i(x)\le 0,\quad i=1,\ldots,m,$$

where the functions $f_0, f_1, \ldots, f_m$ may be nonconvex but possess a structure (e.g., DC decomposition, smoothness, or local convexity under linearization) that enables local convex approximations. The feasible set $\Omega$ is typically convex and closed, but nonconvexities may appear in constraints as differences of convex functions, as in

$$f_i(x) = u_i(x) - v_i(x)\le 0,\qquad u_i, v_i\in\Gamma_0(\mathbb{R}^n)\ \text{convex}$$

(Quoc et al., 2011). In robust and control settings, additional parametric and dynamic constraints are present, and in mixed-integer or contact-implicit problems, complementarity and rounding steps may appear (Sambharya et al., 13 Nov 2025, Li et al., 3 Feb 2025). The framework also encompasses sequential decision processes and trajectory optimization with equality, inequality, and possibly logic/discrete constraints.

2. Core Sequential Convex Optimization Algorithms

At each iteration $k$, sequential convex optimization constructs a convex surrogate subproblem reflecting the local behavior of the original problem near the current iterate $x^k$. The most canonical approach is to preserve convex components and linearize concave or nonlinear terms, exemplified by the DC-programming method: at $x^k$, for each $i$,

$$f_i(x) = u_i(x) - v_i(x) \approx u_i(x) - v_i(x^k) - (\Xi_i^k)^\top (x - x^k),$$

with $\Xi_i^k\in\partial v_i(x^k)$ a subgradient, yielding a convex constraint (Quoc et al., 2011). The convex subproblem is then

$$\begin{cases} \min_{x\in\mathbb{R}^n} & f_0(x) + \dfrac{\rho}{2}\|x-x^k\|^2 \\ \text{s.t.} & u_i(x) - v_i(x^k) - (\Xi_i^k)^\top(x-x^k)\le 0,\quad i=1,\ldots,m, \\ & x\in\Omega. \end{cases}$$
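To make the iteration concrete, here is a minimal sketch on a hand-built one-dimensional DC program (the instance, parameter values, and function names are our own illustration, not taken from the cited papers). Because the linearized proximal subproblem is a one-dimensional quadratic with a single linear constraint, it has a closed-form solution and the whole loop fits in a few lines:

```python
# Toy DC program (illustrative, not from the cited papers):
#   min (x - 0.5)^2   s.t.   1 - x^2 <= 0,
# i.e. u(x) - v(x) <= 0 with u(x) = 1 and v(x) = x^2 both convex.

def scp_dc(x0=1.5, rho=1.0, iters=20):
    x = x0
    for _ in range(iters):
        # Linearize -v at x_k:  1 - x_k^2 - 2*x_k*(x - x_k) <= 0,
        # which rearranges to the linear constraint x >= (1 + x_k^2) / (2*x_k).
        lower = (1.0 + x * x) / (2.0 * x)
        # Proximal subproblem: min (x - 0.5)^2 + (rho/2)*(x - x_k)^2,
        # solved in closed form, then clipped to the linearized constraint.
        x_unc = (1.0 + rho * x) / (2.0 + rho)
        x = max(x_unc, lower)
    return x

x_star = scp_dc()  # converges to the KKT point x* = 1 of the original problem
```

Starting from the feasible point $x^0 = 1.5$, the iterates remain feasible and converge rapidly to the constrained minimizer $x^\star = 1$.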

An analogous strategy is used in trust-region sequential convex optimization for trajectory planning, where nonconvex constraints are linearized at a reference trajectory and embedded within a trust region whose radius adapts according to model fidelity and progress:

$$\|x - x^k\| \le \delta_k,$$

with steps accepted or rejected, and $\delta_k$ adapted, via the reduction ratio $\rho_k$ (Chen et al., 6 Jun 2025, Bonalli et al., 2019). Proximal or penalty terms can be included for regularization and to control steps.
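A minimal sketch of the accept/reject and radius-update rule; the thresholds (0.25/0.75) and scaling factors below are conventional trust-region choices, not values taken from the cited papers:

```python
def trust_region_update(actual_red, pred_red, delta,
                        eta=0.1, shrink=0.5, grow=2.0, delta_max=10.0):
    """One common accept/reject + radius rule driven by the reduction ratio."""
    ratio = actual_red / pred_red if pred_red > 0 else 0.0
    accept = ratio > eta                       # enough true improvement: accept
    if ratio < 0.25:
        delta *= shrink                        # poor model fidelity: shrink radius
    elif ratio > 0.75:
        delta = min(grow * delta, delta_max)   # good fidelity: expand radius
    return accept, delta
```

When the surrogate predicts the true cost reduction well (ratio near 1), the region grows and steps become more aggressive; a negative or tiny ratio rejects the step and shrinks the region.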

Relaxations with slack variables enable feasibility when linearizations are inconsistent:

$$u_i(x) - v_i(x^k) - (\Xi_i^k)^\top(x-x^k)\le s_i,\quad s_i\ge 0,$$

with a penalty $\mu\sum_i s_i$ added to the objective (Quoc et al., 2011). Taylor- or Hessian-based inner-convex approximations, sampled interpolation bundles, and smoothing/penalized merit functions extend the possible surrogate constructions (Virgili-Llop et al., 2018, Tracy et al., 30 Sep 2025). In derivative-free settings, convex subproblems may be constructed via sample-based interpolation and the bundle method, where the iterate is a convex combination of sampled states (Tracy et al., 30 Sep 2025). For parametric and online problems, predictor-corrector approaches and adjoint-based corrections accelerate tracking (Dinh et al., 2011).
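Minimizing over the slacks in closed form ($s_i = \max(0, g_i(x))$ for linearized constraint $g_i(x) \le s_i$) turns the $\mu\sum_i s_i$ term into an exact $L_1$ penalty on constraint violation. The one-dimensional toy below (our own instance, not from the cited papers) shows the exact-penalty effect: once $\mu$ exceeds a finite threshold, the penalized minimizer coincides with the constrained one:

```python
def penalized_min(c, mu):
    """Exactly minimize (x - 0.5)^2 + mu * max(0, c - x) (piecewise quadratic).

    This is the slack-penalty subproblem for the toy constraint x >= c after
    the slack s = max(0, c - x) is eliminated in closed form.
    """
    f = lambda x: (x - 0.5) ** 2 + mu * max(0.0, c - x)
    # Branch x >= c: quadratic minimized at 0.5, clipped to the boundary c.
    candidates = [max(0.5, c)]
    # Branch x <= c: stationary point of (x - 0.5)^2 + mu * (c - x).
    x_pen = 0.5 + mu / 2.0
    if x_pen <= c:
        candidates.append(x_pen)
    return min(candidates, key=f)
```

With $c = 1$, the penalized minimizer sits at the constrained solution $x = 1$ once $\mu \ge 2(c - 0.5) = 1$; below that threshold it stops short of feasibility, which is why penalty weights are adjusted upward when violations persist.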

3. Theoretical Guarantees: Convergence and Feasibility

The sequential convex optimization framework enjoys strong theoretical guarantees under general assumptions:

  • Global Feasibility Preservation: Inner-approximation property ensures that if the starting point is feasible and inner-convex approximations are used, all iterates remain feasible for the original problem (Quoc et al., 2011, Virgili-Llop et al., 2018).
  • Global Convergence to Stationary Points: Under convexity or mild regularity, every accumulation point of the sequence of iterates is a Karush–Kuhn–Tucker (KKT) point of the original nonconvex problem, and step norms tend to zero (Quoc et al., 2011, Bonalli et al., 2019).
  • Descent/Ascent Property: The sequence of objective values is nonincreasing (for minimization) or nondecreasing (for maximization in ascent algorithms), strictly monotone under certain conditions, and bounded below (or above), ensuring convergence of the objective sequence (Virgili-Llop et al., 2018, Bingane, 2020).
  • Quadratic or Linear Rate: With Taylor-based inner-convexification and smoothness, local quadratic convergence is provable (Virgili-Llop et al., 2018). Standard first-order SCP steps achieve at least linear convergence under strong regularity.
  • Worst-case Guarantees: Recent verification frameworks encode SCP iterations in a single (mixed-integer) QCQP to yield exact worst-case bounds over parametric families, validating convergence and robustness in a global sense (Sambharya et al., 13 Nov 2025).
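The feasibility-preservation and descent properties can be checked numerically on a small hand-built DC program (the instance below is our own illustration, not from the cited papers): the linearization of the concave part majorizes the original constraint, so every subproblem-feasible iterate is feasible for the original problem, and the proximal term forces a nonincreasing objective sequence:

```python
def dc_scp_trace(x0=1.5, rho=1.0, iters=15):
    """SCP iterates on the toy DC program: min (x-0.5)^2 s.t. 1 - x^2 <= 0.

    Since (x - xk)^2 >= 0, the linearization 1 - xk^2 - 2*xk*(x - xk)
    majorizes 1 - x^2, so subproblem feasibility implies original feasibility.
    """
    xs = [x0]
    for _ in range(iters):
        xk = xs[-1]
        lower = (1.0 + xk * xk) / (2.0 * xk)     # linearized constraint: x >= lower
        x_unc = (1.0 + rho * xk) / (2.0 + rho)   # proximal minimizer, closed form
        xs.append(max(x_unc, lower))
    return xs

iterates = dc_scp_trace()
feasible = all(1.0 - x * x <= 1e-12 for x in iterates)     # inner approximation
objectives = [(x - 0.5) ** 2 for x in iterates]
descending = all(a >= b - 1e-12 for a, b in zip(objectives, objectives[1:]))
```

Both flags come out true: every iterate satisfies the original nonconvex constraint, and the objective values decrease monotonically toward the KKT value.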

The key assumptions are preservation of feasibility under the surrogate model, boundedness of iterates and regularity of the convexified constraints, and—when present—sufficient strength of convexification or regularization to prevent vanishing trust regions.

4. Applications Across Domains

Sequential convex optimization has seen broad adoption and validation across diverse domains:

  • DC-Constrained Programming: The SCP-DC method provides global convergence (to KKT points) and practical success for large-scale nonconvex quadratic constrained quadratic programs (QCQPs) and mathematical programs with complementarity constraints (MPCC); relaxations and slack penalties ensure feasibility even in the presence of inconsistent linearizations (Quoc et al., 2011).
  • Trajectory and Path Planning: Trust-region SCP and sample-based bundle methods, derivative-free or otherwise, underpin real-time optimal guidance for vehicles including drones, rockets, and cars, offering fast, high-fidelity solutions with constraint satisfaction along the trajectory (Chen et al., 6 Jun 2025, Bonalli et al., 2019, Huang et al., 2017, Yuan et al., 19 Aug 2025, Tracy et al., 30 Sep 2025, Kamath et al., 2022).
  • Robust Optimization and Feasibility Certification: Sequential convex restriction, via convex sufficient conditions, provides guarantees of robust feasibility under bounded uncertainty and applicability to polynomial optimization and nonlinear network flow problems (Lee et al., 2019).
  • Game Theory and Decision Processes: Laminar regret decomposition transforms general sequential decision processes and extensive-form games into tractable convex subproblems at each node, recovering and generalizing counterfactual regret minimization (CFR) and supporting regularized and quantal response equilibria (Farina et al., 2018).
  • Contact-Implicit and Hybrid Systems: Convexified primal QP SCP with trust-region and merit function globalization achieve robust solution of contact-implicit (complementarity-constrained) motion planning, even in the face of constraint qualification failure (Li et al., 3 Feb 2025).
  • Differentiable Parameter Optimization: Differentiable SCP platforms enable end-to-end learning and differentiable optimization of algorithmic hyperparameters and vehicle/mission parameters, propagating exact gradients through all SCP layers (Xu et al., 3 Dec 2025).

5. Algorithmic Structures and Modern Enhancements

Various enhancements reinforce the practical applicability, flexibility, and computational performance of sequential convex optimization:

  • Trust Region Adaptation: Step acceptance and radius updates governed by modeled and actual reduction ratios (e.g., $\rho = \Delta J_\text{actual} / \Delta J_\text{pred}$) guarantee robustness and enable aggressive or conservative progression (Chen et al., 6 Jun 2025, Bonalli et al., 2019).
  • Slack-Based Relaxation and Penalty Adjustments: $L_1$ or $L_2$ slack penalties ensure subproblem feasibility, continuous progress on infeasible linearized constraints, and balanced objective–feasibility trade-offs (Quoc et al., 2011).
  • Adaptive and Derivative-Free Initialization: Bayesian filtering and bundle interpolation warm-starts leverage statistical models for initialization and foster rapid convergence in online and high-dimensional scenarios (Yuan et al., 19 Aug 2025, Tracy et al., 30 Sep 2025).
  • Adjoint and Predictor–Corrector Acceleration: Implicit function theorem–based predictor steps and adjoint corrections facilitate path-tracking across parametric or online problems, enabling real-time nonlinear model predictive control in large-scale systems (Dinh et al., 2011).
  • High-Performance Solvers and Scalability: The convex QP/SOCP/SOCP subproblems, often sparse and structured, are amenable to efficient interior-point, first-order projection, and GPU-accelerated solvers, yielding per-iteration times suitable for embedded and real-time deployment (Kamath et al., 2022, Xu et al., 3 Dec 2025).
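These pieces compose into a generic SCP loop. The skeleton below (interfaces, names, and the toy linearized-model demo are our own illustration, not an implementation from the cited papers) accepts or rejects each convexified step via the reduction ratio:

```python
def scp_loop(x0, subproblem, cost, model, delta0=1.0, eta=0.1, iters=50):
    """Generic trust-region SCP skeleton (sketch; interfaces are illustrative).

    subproblem(x, delta) -> minimizer of the convex surrogate over the region
    cost(x)              -> true (nonconvex) objective
    model(x_ref, x)      -> surrogate objective built at x_ref, evaluated at x
    """
    x, delta = x0, delta0
    for _ in range(iters):
        x_new = subproblem(x, delta)
        actual = cost(x) - cost(x_new)           # true reduction
        pred = model(x, x) - model(x, x_new)     # reduction the surrogate predicts
        ratio = actual / pred if pred > 1e-12 else 1.0
        if ratio > eta:       # model agreed well enough: accept the step
            x = x_new
            if ratio > 0.75:
                delta *= 2.0
        else:                 # poor agreement: reject and shrink the region
            delta *= 0.5
    return x

# Demo: minimize x^2 using its linearization as the convex surrogate.
sign = lambda v: (v > 0) - (v < 0)
xmin = scp_loop(
    x0=3.0,
    subproblem=lambda xk, d: xk - d * sign(2 * xk),  # linear model over |x-xk|<=d
    cost=lambda x: x * x,
    model=lambda xk, x: xk * xk + 2 * xk * (x - xk),
)
```

In production settings the `subproblem` call is a structured QP/SOCP handed to an interior-point or first-order solver, which is where the scalability gains cited above come from.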

6. Limitations, Verification, and Open Directions

Key challenges and modern responses include:

  • Locality and Suboptimality: Sequential convex approaches generally guarantee convergence to stationary points, not global optima, though empirical performance is often nearly optimal. Recent MIQCQP-based verification addresses global suboptimality and constraint satisfaction over parametric spaces (Sambharya et al., 13 Nov 2025).
  • Linearization Validity/Trust-Region Shrinkage: Poor surrogate fidelity, especially in highly nonlinear regimes or with ill-chosen trust-region radii, may impede convergence. Adaptive shrinkage and model-based acceptance strategies mitigate this risk.
  • Constraint Qualification Failures: In problems such as MPCC or contact-implicit planning, the lack of classical constraint qualifications precludes standard dual methods; primal-only exact penalty and slack strategies preserve progress (Li et al., 3 Feb 2025).
  • Computational Complexity: While each subproblem is convex, large-scale instances with high-dimensional or logic constraints (e.g., integer, complementarity, sampling) can be computationally demanding. Exploiting problem structure and parallelizing function evaluation and bundle construction ameliorates these costs (Tracy et al., 30 Sep 2025, Kamath et al., 2022).
  • Derivative-Free and Non-Smooth Extensions: Extending sequential convex optimization to non-smooth, black-box, or simulation-driven settings is active, with the sample-based bundle framework providing a principled path (Tracy et al., 30 Sep 2025).

The framework is active in high-dimensional mission design, learning-augmented control, and robust optimization with ongoing research on objective tightness, global certification, and algorithmic acceleration.


References

  • "Sequential Convex Programming Methods for Solving Nonlinear Optimization Problems with DC constraints" (Quoc et al., 2011)
  • "Enhanced Trust Region Sequential Convex Optimization for Multi-Drone Thermal Screening Trajectory Planning in Urban Environments" (Chen et al., 6 Jun 2025)
  • "Online Convex Optimization for Sequential Decision Processes and Extensive-Form Games" (Farina et al., 2018)
  • "GuSTO: Guaranteed Sequential Trajectory Optimization via Sequential Convex Programming" (Bonalli et al., 2019)
  • "Verification of Sequential Convex Programming for Parametric Non-convex Optimization" (Sambharya et al., 13 Nov 2025)
  • "Speed trajectory planning at signalized intersections using sequential convex optimization" (Huang et al., 2017)
  • "Sequential Convex Restriction and its Applications in Robust Optimization" (Lee et al., 2019)
  • "Real-Time Sequential Conic Optimization for Multi-Phase Rocket Landing Guidance" (Kamath et al., 2022)
  • "Adjoint-based predictor-corrector sequential convex programming for parametric nonlinear optimization" (Dinh et al., 2011)
  • "A recursively feasible and convergent Sequential Convex Programming procedure to solve non-convex problems with linear equality constraints" (Virgili-Llop et al., 2018)
  • "Largest small polygons: A sequential convex optimization approach" (Bingane, 2020)
  • "On the Surprising Robustness of Sequential Convex Optimization for Contact-Implicit Motion Planning" (Li et al., 3 Feb 2025)
  • "The Trajectory Bundle Method: Unifying Sequential-Convex Programming and Sampling-Based Trajectory Optimization" (Tracy et al., 30 Sep 2025)
  • "Sequential Convex Programming with Filtering-Based Warm-Starting for Continuous-Time Multiagent Quadrotor Trajectory Optimization" (Yuan et al., 19 Aug 2025)
  • "Parameters Optimization in Trajectory Planning Using Diffrentiable Convex Programing" (Xu et al., 3 Dec 2025)