Sequential Linear Programming: Methods & Applications
- Sequential Linear Programming (SLP) is an iterative optimization method that linearizes nonlinear objectives and constraints using Taylor expansion and trust-region techniques.
- It constructs and solves LP subproblems at each iteration, leveraging active-set updates to efficiently handle feasibility and non-smooth structures.
- Variants like FSLP and afSLP extend SLP for real-time, robust applications, demonstrating superior performance in robotics, energy, and process engineering benchmarks.
Sequential Linear Programming (SLP) is a class of iterative algorithms for solving nonlinear programs (NLPs) and mathematical programs with equilibrium or complementarity constraints (MPECs or MPCCs), in which a sequence of linear programming subproblems approximates the nonlinear constraints and objective, subject to trust-region globalization mechanisms and active-set updates. SLP is characterized by its reliance on local linearization, its low per-iteration computational complexity, and its flexibility in enforcing feasibility and handling non-smooth or degenerate structures.
1. Core Algorithmic Framework
At each iteration, SLP constructs a linear program (LP) by first-order Taylor expansion of the objective and constraints around the current iterate $x_k$. The generic subproblem takes the form

$$\min_{d}\; \nabla f(x_k)^\top d \quad \text{s.t.}\quad g(x_k) + \nabla g(x_k)^\top d \le 0,\quad h(x_k) + \nabla h(x_k)^\top d = 0,\quad \|d\|_\infty \le \Delta_k,$$

with $\Delta_k$ the trust-region radius. The solution $d_k$ determines the next iterate $x_{k+1} = x_k + d_k$ if sufficient model decrease is achieved, according to a ratio test between predicted and actual decrease. Trust-region update strategies ensure global convergence while maintaining constraint satisfaction and bounding step lengths.
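The basic loop can be sketched on a toy unconstrained instance. With only the $\ell_\infty$ trust-region constraint present, the LP subproblem has a closed-form solution, so no LP solver is needed here; a general implementation would call one at the marked step (all names below are illustrative):

```python
def slp_minimize(f, grad, x0, delta0=0.5, eta=0.1, shrink=0.5,
                 tol=1e-8, max_iter=100):
    """Toy SLP loop with an l-infinity trust region (unconstrained case).

    With only the trust-region constraint, the LP subproblem
        min_d  grad(x)^T d   s.t.  ||d||_inf <= delta
    has the closed-form solution d_i = -delta * sign(g_i); a general
    SLP implementation would call an LP solver at this step instead.
    """
    x, delta = list(x0), delta0
    for _ in range(max_iter):
        g = grad(x)
        if max(abs(gi) for gi in g) <= tol or delta <= tol:
            break
        # LP solution: move each coordinate to the trust-region boundary.
        d = [-delta * ((gi > 0) - (gi < 0)) for gi in g]
        pred = -sum(gi * di for gi, di in zip(g, d))  # predicted decrease
        x_trial = [xi + di for xi, di in zip(x, d)]
        ared = f(x) - f(x_trial)                      # actual decrease
        if pred > 0 and ared >= eta * pred:           # ratio test: accept
            x = x_trial
        else:                                         # reject: shrink radius
            delta *= shrink
    return x

# Example: minimize (x0 - 1)^2 + (x1 + 0.5)^2 starting from the origin.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 2.0 * (x[1] + 0.5)]
x_star = slp_minimize(f, grad, [0.0, 0.0])
```

Because the LP always steps to a trust-region vertex, progress near the solution is governed by the radius update, which is why a quadratic refinement stage (Section 3) is needed for fast local convergence.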
For bound-constrained MPCCs, each iteration solves an LPCC (linear program with complementarity constraints) whose feasible set is the product of box constraints and 2D “crosses” enforcing $x_i \ge 0$, $y_i \ge 0$, $x_i y_i = 0$; efficient enumeration of corner points yields linear, $\mathcal{O}(n)$, complexity per subproblem (Kirches et al., 2020). In classical SLP for smooth NLPs, both equality and inequality constraints are linearized, leading to a trust-region LP that is solved for descent directions (Kiessling et al., 2022).
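The cross structure can be made concrete: a linear objective restricted to one 2D cross attains its minimum at one of three corner points, so a separable objective decomposes coordinate-wise into trivial enumerations. This is a sketch of the idea only; the actual LPCC subproblems in Kirches et al. also carry the trust region and linearized problem data:

```python
def solve_cross_lp(c1, c2, ux, uy):
    """Minimize c1*x + c2*y over one 2D 'cross'
    {(x, y) : x >= 0, y >= 0, x*y = 0, x <= ux, y <= uy}
    by enumerating its three corner points."""
    corners = [(0.0, 0.0), (ux, 0.0), (0.0, uy)]
    return min(corners, key=lambda p: c1 * p[0] + c2 * p[1])

def solve_lpcc(costs, bounds):
    """Coordinate-decomposed LPCC solve: a separable objective splits over
    the n independent crosses, so the total cost is linear in n."""
    return [solve_cross_lp(c1, c2, ux, uy)
            for (c1, c2), (ux, uy) in zip(costs, bounds)]

# Example: two complementarity pairs with linear costs and upper bounds.
sol = solve_lpcc([(-1.0, -2.0), (3.0, -1.0)], [(1.0, 3.0), (2.0, 2.0)])
```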
2. Trust-Region Globalization and Active-Set Estimation
SLP employs trust-region constraints ($\|d\|_\infty \le \Delta_k$) to stabilize steps, prevent divergence, and ensure robust progress. The two main globalization paradigms are:
- Merit-function ratio test: Accept the step $d_k$ if the actual reduction $\mathrm{ared}_k$ exceeds a fraction $\eta \in (0,1)$ of the predicted reduction $\mathrm{pred}_k$, i.e., if $\mathrm{ared}_k \ge \eta\,\mathrm{pred}_k$.
- Active-set estimation: The structure of the LP solution identifies active bounds or complementarity pairs, informing the selection of which constraints to treat as equality in subsequent subproblems.
For MPCCs, after each LPCC step, the partitioned sets of active inequalities and complementarity components are updated (Kirches et al., 2020). In robust SLP frameworks, feasibility-refinement inner loops project infeasible steps back into the feasible set, either exactly (FSLP) or within a tolerance tube (afSLP), via sequences of parametric LPs operating on zero-order constraint information (Kiessling et al., 2022, Kiessling et al., 24 Jan 2024).
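A zero-order inner loop of this kind can be sketched for a single equality constraint: the constraint gradient is frozen at the outer iterate, and each correction is the minimum-norm solution of the linearized system, so only fresh constraint evaluations are needed. This is a deliberate simplification of the parametric-LP refinement; the function name and single-constraint setting are illustrative:

```python
def feasibility_refinement(g, a_frozen, x, tol=1e-10, max_iter=50):
    """Zero-order feasibility inner loop (FSLP-style sketch, one constraint).

    The constraint gradient a_frozen is held fixed at the outer iterate;
    each pass re-evaluates only g(x) and applies the minimum-norm
    correction solving the linearized equation a^T dx = -g(x).
    """
    nrm2 = sum(ai * ai for ai in a_frozen)
    for _ in range(max_iter):
        r = g(x)
        if abs(r) <= tol:
            break
        step = -r / nrm2
        x = [xi + step * ai for xi, ai in zip(x, a_frozen)]
    return x

# Example: project back onto the unit circle after an outer step to (1.2, 0);
# the gradient of g(x) = x0^2 + x1^2 - 1 frozen there is (2.4, 0).
g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
x_feas = feasibility_refinement(g, [2.4, 0.0], [1.2, 0.0])
```

Freezing the Jacobian is what makes the loop "zero-order": the expensive derivative evaluation happens once per outer iteration, while the cheap constraint residual is re-evaluated inside the loop.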
3. Convergence Theory and Stationarity
Standard SLP under regularity assumptions converges locally linearly to KKT points in smooth NLPs (Kiessling et al., 2022). For bound-constrained MPCCs, the relevant notion of stationarity is B-stationarity: a point $x^*$ is B-stationary if $d = 0$ minimizes the linearized objective subject to both the bounds and the linearized complementarity constraints (Kirches et al., 2020). Global convergence to B-stationary points is guaranteed under Lipschitz continuity of the problem data.
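In symbols, the B-stationarity condition can be written as follows (a standard formulation, reconstructed here for concreteness; $\ell, u$ denote the bounds and $\mathcal{T}(x^*)$ the tangent cone of the complementarity set at $x^*$):

```latex
% x* is B-stationary iff d = 0 is a global minimizer of the linearized LPCC
\min_{d}\;\; \nabla f(x^*)^{\top} d
\quad \text{s.t.} \quad
\ell \le x^* + d \le u,
\qquad d \in \mathcal{T}(x^*).
```

Because the feasible set of this linearized problem is nonconvex (a union of polyhedral pieces), B-stationarity is strictly stronger than the weak or C-stationarity concepts obtained from smoothed reformulations.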
When active-set identification is correct, SLP algorithms equipped with a quadratic-programming refinement stage (BQP/SQP) can achieve local superlinear convergence (Kirches et al., 2020). Feasible SLP methods attain global convergence via trust-region contraction, projection-ratio criteria, and filter-type switches (Kiessling et al., 2022, Kiessling et al., 24 Jan 2024). Anderson acceleration applied to the feasibility-inner fixed-point process yields improved contraction rates and locally linear convergence (Kiessling et al., 2022).
4. SLP Variants and Extensions
Several prominent SLP variants address specific challenges:
- Feasible SLP (FSLP): Ensures strict feasibility of all outer iterates via an inner-loop feasibility restoration based on zero-order constraint evaluations (Kiessling et al., 2022). This enables early termination with feasible, suboptimal solutions, which is vital in real-time and safety-critical applications.
- Almost Feasible SLP (afSLP): Permits iterates within a relaxed tube around the feasible set, reducing per-iteration feasibility cost and enabling infeasible initialization (Kiessling et al., 24 Jan 2024). Tolerance-tube width is adaptively shrunk as feasibility is approached, controlled via a dual merit-filter and restoration LPs.
- Anderson Accelerated FSLP: Applies Anderson acceleration of arbitrary depth to the inner feasibility iterations, improving contraction and reducing total constraint evaluations, with significant speed-ups in robotics/NMPC benchmarks (Kiessling et al., 2022).
- SLP for Complementarity and MPECs: Coordinate-decomposed LPCC subproblems exploit the inherent 2D “cross” structure, yielding $\mathcal{O}(n)$ solution time per subproblem (Kirches et al., 2020).
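The effect of Anderson acceleration can be illustrated on a generic fixed-point iteration. The depth-1 variant below mixes the two most recent iterates; depth $m$ replaces the scalar mixing coefficient with a small least-squares problem over the last $m$ residual differences. This is a sketch of the idea, not the FSLP-specific implementation:

```python
import math

def anderson_depth1(G, x0, tol=1e-12, max_iter=100):
    """Depth-1 Anderson acceleration of the scalar fixed point x = G(x).

    The update mixes the two most recent iterates so that the
    extrapolated residual is minimized; for depth 1 the mixing
    coefficient gamma has a closed form.
    """
    x_prev, g_prev = x0, G(x0)
    x, g = g_prev, G(g_prev)             # one plain step builds the history
    for _ in range(max_iter):
        f, f_prev = g - x, g_prev - x_prev   # fixed-point residuals
        denom = f - f_prev
        gamma = f / denom if denom != 0.0 else 0.0
        x_new = g - gamma * (g - g_prev)     # Anderson-mixed update
        if abs(x_new - x) <= tol:
            return x_new
        x_prev, g_prev = x, g
        x, g = x_new, G(x_new)
    return x

# Example: accelerate the classic slowly converging iteration x = cos(x).
x_fix = anderson_depth1(math.cos, 1.0)
```

In the scalar case depth-1 Anderson reduces to a secant update on the residual, which is why it converges superlinearly here while plain Picard iteration only converges linearly.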
5. Application Domains and Benchmark Results
SLP methodologies are employed in diverse application areas:
- Time-optimal control and trajectory planning (robotics, mechatronics, vehicles): FSLP, afSLP, and Anderson-accelerated FSLP have demonstrated superior efficiency, scaling linearly with problem size and outperforming interior-point solvers such as IPOPT in constraint evaluations and wall time when applied to discretized SCARA robot models (Kiessling et al., 2022, Kiessling et al., 24 Jan 2024).
- Pooling problem (process engineering): Multi-start SLP, especially when paired with the qq-formulation (proportional variables only), is empirically more efficient than SQP and IPOPT for finding high-quality solutions within practical time budgets, with hundreds-fold reductions in expected time to solution in industrial benchmarks (Grothey et al., 2020).
- AC optimal power flow (energy markets): An SLP algorithm solving a sequence of LPs with supporting-hyperplane and halfspace cuts achieves AC-feasible solutions on large test cases (up to 3,375 buses), matching NLP solver accuracy with small optimality gaps and negligible constraint violation, and exhibiting robust convergence from arbitrary initializations (Sleiman et al., 2021).
- Trajectory optimization for compliant robotics: Actuator-centered SLP with separation of linear actuator dynamics and nonlinear robot impedance dynamics, equipped with pseudo-mass tuning for discretization accuracy, enables fast convex subproblem solves and leverages compliance for higher performance (Schlossman et al., 2018).
- Data-driven computational mechanics: SLP algorithms with adaptive convex-hull trust-regions reliably quantify uncertainty in structural response bounds via efficient LPs, robust to noise and outliers, with convergence demonstrated on large truss and FE models (Huang et al., 2022).
Table: Representative SLP Algorithmic Variants
| Variant | Feasibility Guarantee | Stationarity Target |
|---|---|---|
| Classical SLP (NLP) | No (outer steps may be infeasible) | KKT |
| FSLP | Yes (all iterates feasible) | KKT |
| afSLP | Yes (within user ε) | KKT |
| SLPCC (Complementarity) | Yes (strict bounds, complementarity) | B-Stationarity |
| SLP (Uncertainty Analysis) | Yes (response bounds) | None (interval bound) |
6. Complexity, Implementation, and Practical Considerations
SLP subproblems are LPs with trust-region constraints, leading to low per-iteration complexity. For large-scale problems, the dominant cost shifts to LP solve time and constraint evaluation (inner/outer loops). Efficient enumeration or parametric LPs in feasibility restoration can limit overhead (Kiessling et al., 2022). Adaptive tube tolerances and filter-switching mechanisms preclude cycling and enable robust progress (Kiessling et al., 24 Jan 2024).
Critical implementation concerns include:
- Selection of initial trust-region radii, inner-loop contraction factors, and cut update rules.
- Management of accumulated cuts (hyperplanes/halfspaces) for nonconvex applications (e.g., OPF), to avoid bloated LP size (Sleiman et al., 2021).
- Interface with commercial LP solvers (CPLEX, Gurobi) to exploit modern simplex/interior-point implementations.
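Cut management in particular is often handled with simple aging heuristics: a cut that has been slack at the LP solution for several consecutive solves is unlikely to become binding again and can be dropped. The helper below (`prune_cuts` and its bookkeeping are hypothetical, not taken from the cited papers) sketches one such rule:

```python
def prune_cuts(cuts, x, slack_tol=1e-6, max_idle=3):
    """Keep the accumulated cut set {a^T x <= b} small: a cut whose slack
    at the last LP solution exceeds slack_tol for max_idle consecutive
    solves is dropped; binding cuts have their idle counter reset."""
    kept = []
    for a, b, idle in cuts:
        slack = b - sum(ai * xi for ai, xi in zip(a, x))
        idle = idle + 1 if slack > slack_tol else 0
        if idle < max_idle:
            kept.append((a, b, idle))
    return kept

# Example: the first cut has already been slack twice and is slack again,
# so it is dropped; the second cut is binding and survives with idle = 0.
cuts = [((1.0, 0.0), 1.0, 2), ((0.0, 1.0), 0.5, 0)]
remaining = prune_cuts(cuts, (0.0, 0.5))
```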
Parameter choices (e.g., convex-hull size, shrinkage factor, and tolerance thresholds) influence convergence rate, solution tightness, and LP size in data-driven SLP variants (Huang et al., 2022).
7. Connections, Impact, and Outlook
SLP generalizes the philosophy of sequential quadratic programming (SQP) to LP-only subproblems, offering distinct advantages for large, degenerate, or combinatorial NLPs. Its inherent modularity allows incorporation of advanced techniques such as:
- Active-set refinement via secondary QP subproblems for superlinear convergence (Kirches et al., 2020).
- Feasibility restoration via LP projections, ensuring that iteration can be stopped early at a feasible point (Kiessling et al., 2022).
- Acceleration via Anderson mixing of the inner feasibility iterations, dramatically reducing total iterations (Kiessling et al., 2022).
- Model-free uncertainty quantification for data-driven mechanics (Huang et al., 2022).
Empirical studies confirm SLP’s reliability, scalability, and computational efficiency across diverse benchmarks. The method’s strict feasibility enforcement and LP foundation make it attractive for embedded, real-time, and safety-critical optimization settings, as well as for emerging areas in industrial, energy, and robotic applications.
A plausible implication is continued expansion into hybrid and mixed-integer programs, domain-specific variance reduction strategies, and distributed or parallel SLP variants leveraging modern LP solver architectures.