Chance-Constrained Stochastic Control
- Chance-constrained stochastic control is a mathematical framework that optimizes closed-loop performance by enforcing probabilistic state and input constraints.
- It utilizes deterministic reformulations like probabilistic reachable tubes and expectation-plus-margin rules to transform intractable chance constraints into convex or biconvex programs.
- Applications in process control, robotics, and urban infrastructure demonstrate its effectiveness in balancing safety, performance, and risk under uncertainty.
A chance-constrained stochastic control framework is a mathematical architecture for optimizing closed-loop performance of stochastic dynamical systems under explicit probabilistic state and input constraints. Unlike robust or deterministic approaches, such frameworks guarantee that constraint violations remain rare, as quantified by precise probability bounds: typically, state or input constraints are enforced with probability at least $p \in (0,1)$ at each time step, or jointly over a trajectory. The formulation is essential in applications subject to safety, comfort, or reliability requirements amid uncertainty, including process control, power systems, robotics, and urban infrastructure. Contemporary frameworks encompass both model-based and data-driven methodologies, with various deterministic reformulations for tractable synthesis under assumptions on disturbance statistics. The following sections synthesize the core developments and methodologies of chance-constrained stochastic control, drawing on rigorous contributions such as (Engelaar et al., 2023, Svensen et al., 2020, Okamoto et al., 2018, Kato et al., 26 Jan 2026), and related works.
1. Mathematical Formulation and Problem Structure
A canonical framework considers a linear discrete-time system with additive stochastic disturbance,

$$x_{k+1} = A x_k + B u_k + w_k,$$

where $x_k \in \mathbb{R}^n$ is the state, $u_k \in \mathbb{R}^m$ the control, and $w_k \in \mathbb{R}^n$ is an i.i.d. disturbance, often assumed to have $\mathbb{E}[w_k] = 0$, $\mathrm{Cov}(w_k) = \Sigma_w$, and be central convex unimodal or, in special cases, Gaussian (Engelaar et al., 2023). The admissible state and input regions are convex, typically polytopic.
Chance constraints are imposed pointwise-in-time, e.g.,

$$\Pr(x_k \in \mathcal{X}) \ge p_x, \qquad \Pr(u_k \in \mathcal{U}) \ge p_u,$$

with desired levels $p_x, p_u \in (0,1)$, or jointly over a finite horizon. The objective is to synthesize a causal, possibly time-varying control policy minimizing an expected quadratic cost,

$$J = \mathbb{E}\!\left[\sum_{k=0}^{N-1}\left(x_k^\top Q x_k + u_k^\top R u_k\right) + x_N^\top Q_N x_N\right],$$

subject to the chance constraints. In many advanced formulations, additional design flexibility is introduced via relaxed probability levels $\tilde p_x \le p_x$, $\tilde p_u \le p_u$, which are themselves decision variables to be optimized (Engelaar et al., 2023).
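To make the pointwise-in-time constraint concrete, the following pure-Python sketch estimates by Monte Carlo how often a scalar instance of the dynamics above stays within a state bound over a horizon; all numerical values (dynamics, feedback gain, bound, noise level) are illustrative assumptions, not taken from the cited works.

```python
# Monte Carlo check of a chance constraint for a scalar instance of
# x_{k+1} = A x_k + B u_k + w_k under a fixed feedback u_k = K x_k.
# All numbers are illustrative assumptions.
import random

random.seed(0)
A, B, K = 0.9, 1.0, -0.5      # scalar dynamics and stabilizing feedback
x_max, p_x = 1.0, 0.9         # require Pr(x_k <= x_max) >= p_x
sigma = 0.2                   # disturbance std, w_k ~ N(0, sigma^2)
N, trials = 20, 5000

violations = 0
for _ in range(trials):
    x, ok = 0.0, True
    for _ in range(N):
        u = K * x
        x = A * x + B * u + random.gauss(0.0, sigma)
        if x > x_max:
            ok = False
            break
    if not ok:
        violations += 1

empirical = 1.0 - violations / trials
print(f"empirical satisfaction over the horizon: {empirical:.3f}")
```

With this (deliberately conservative) gain, the empirical satisfaction frequency comfortably exceeds the target level $p_x$; the synthesis problem in the text is precisely to achieve such satisfaction while minimizing cost rather than by over-conservatism.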
2. Deterministic Reformulation via Probabilistic Reachable Tubes and Other Analytic Tightenings
Because chance constraints are not directly tractable—and typically depend on the full probability distribution of the disturbance—practical frameworks rely on deterministic reformulation strategies.
Probabilistic Reachable Tubes (PRTs) and Tube-based MPC
The core idea is to decompose the state as $x_k = z_k + e_k$, where $z_k$ denotes the nominal (deterministic) trajectory and $e_k$ the stochastic error, which evolves under a stabilizing feedback $K$ as $e_{k+1} = (A + BK)\, e_k + w_k$. For any target probability $p$, the error occupies a probabilistic reachable set $\mathcal{R}^p_k$: e.g., an ellipsoid $\{e : e^\top \Sigma_k^{-1} e \le c_p\}$, where $c_p$ follows from a Chebyshev-type concentration bound or, in the Gaussian case, is determined by the inverse-$\chi^2$ quantile (Engelaar et al., 2023).
The fundamental constraint reformulation is

$$z_k \in \mathcal{X} \ominus \mathcal{R}^{p_x}_k, \qquad v_k \in \mathcal{U} \ominus K\mathcal{R}^{p_u}_k,$$

where $\ominus$ denotes the Pontryagin difference, $v_k$ is the nominal input (so that $u_k = v_k + K e_k$), and $\mathcal{R}^{p_x}_k$, $\mathcal{R}^{p_u}_k$ are error tubes matched to the required chance levels (Engelaar et al., 2023). This tube-based tightening guarantees that if the nominal trajectory satisfies the shrunk constraint sets, then, with the prescribed probability, the true (stochastic) trajectory remains within the constraint set.
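In one dimension the whole construction can be written out in a few lines: the error tube is a Chebyshev ball covering the stationary error with probability $p$, and the Pontryagin difference reduces to shrinking the constraint interval by the tube radius. The sketch below uses a Chebyshev-type bound as one standard distribution-free choice; all numbers are illustrative assumptions.

```python
# Scalar sketch of tube-based tightening: size a Chebyshev error tube and
# shrink the state bound by its radius (a 1-D Pontryagin difference).
# Numbers are illustrative assumptions.
import math

a_cl = 0.4            # closed-loop pole A + B*K (assumed stable)
sigma_w = 0.2         # disturbance std
p = 0.9               # required chance level

# stationary error variance of e_{k+1} = a_cl * e_k + w_k
var_e = sigma_w**2 / (1.0 - a_cl**2)

# Chebyshev: Pr(|e| >= r) <= var_e / r^2, so r = sqrt(var_e / (1 - p))
r = math.sqrt(var_e / (1.0 - p))

x_max = 1.0
z_max = x_max - r     # nominal trajectory must satisfy the shrunk bound
print(f"tube radius {r:.3f}, tightened nominal bound z_k <= {z_max:.3f}")
```

For Gaussian disturbances the tube can be sized by the exact quantile instead of Chebyshev, which noticeably reduces the conservatism of the shrunk bound.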
Other Analytic Tightening Techniques
Alternative deterministic reformulations include:
- Expectation-plus-margin rules for linear-Gaussian dynamics and affine constraints, exploiting quantile functions (e.g., the standard normal quantile $\Phi^{-1}$ for normal $w_k$) (Svensen et al., 2020).
- Biconvex formulations using moment and unimodality assumptions, drawing on inequalities such as Vysochanskij–Petunin, with deterministic constraints of the form $a^\top \bar{x}_k + \lambda \sqrt{a^\top \Sigma_k a} \le b$, where the multiplier $\lambda$ is calibrated to the risk allocation (Priore et al., 2023).
- Distributionally robust surrogates employing convex bounds (e.g., Cantelli, Hoeffding) or Wasserstein balls for unknown or nonparametric disturbance distributions (Kato et al., 26 Jan 2026, Kordabad et al., 2024).
These analytic reformulations are the basis for tractable convex or biconvex programs (LPs, QPs, SOCPs, SDPs), or difference-of-convex programs with iterative solvers.
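The expectation-plus-margin rule admits a very compact implementation. The sketch below reformulates a Gaussian affine chance constraint $\Pr(a^\top x \le b) \ge p$ as the deterministic condition $a^\top \mu + \Phi^{-1}(p)\sqrt{a^\top \Sigma a} \le b$, using only the Python standard library; the 2-D numbers are illustrative assumptions.

```python
# Expectation-plus-margin reformulation of a Gaussian affine chance
# constraint: Pr(a^T x <= b) >= p  <=>  a^T mu + q_p * sqrt(a^T Sigma a) <= b.
from statistics import NormalDist

def tightened_margin(a, Sigma, p):
    """Deterministic margin q_p * sqrt(a^T Sigma a) for the constraint."""
    q_p = NormalDist().inv_cdf(p)          # standard normal quantile
    n = len(a)
    var = sum(a[i] * Sigma[i][j] * a[j] for i in range(n) for j in range(n))
    return q_p * var**0.5

# illustrative 2-D example
a = [1.0, 0.0]
Sigma = [[0.04, 0.0], [0.0, 0.09]]
margin = tightened_margin(a, Sigma, 0.95)
b = 1.0
print(f"nominal constraint: a^T mu <= {b - margin:.3f}")
```

Because the margin is affine in the mean and concave in the (square root of the) variance term, constraints of this form slot directly into LPs, QPs, or SOCPs, as noted above.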
3. Synthesis Algorithms and Optimization Decompositions
The synthesis of optimal controllers under chance constraints typically proceeds via a two-stage or joint optimization.
Two-Stage Decomposition (Safety and Performance)
(Engelaar et al., 2023) introduces a two-stage scheme:
- Safety Stage (Stage 1): Maximize feasible chance levels by minimizing a relaxation cost that penalizes the gap between the desired levels $p_x, p_u$ and the relaxed levels $\tilde p_x, \tilde p_u$, subject to deterministic reformulations of chance constraints.
- Performance Stage (Stage 2): Given the maximal feasible chance levels, synthesize a controller (tube-MPC) minimizing the expected quadratic cost under the same constraint tightenings.
For finite-horizon MPC, both stages can be formulated as LPs or QPs owing to the affine or convex structure of the tightened constraints.
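A one-step scalar problem already exhibits the two-stage structure. In the sketch below, Stage 1 finds the largest chance level for which the tightened constraint is feasible at all, and Stage 2 minimizes a quadratic cost under the tightening at a (possibly relaxed) level; the grid search stands in for the LP/QP solvers mentioned above, and all numbers are illustrative assumptions.

```python
# Two-stage decomposition on a scalar one-step Gaussian problem:
# Stage 1 (safety) maximizes the feasible chance level, Stage 2
# (performance) minimizes a quadratic cost under that tightening.
from statistics import NormalDist

a, b, sigma = 1.0, 1.0, 0.3
x0, x_max = 0.8, 1.0
u_min, u_max = -1.0, 1.0

# Stage 1: the tightening sigma*q_p <= x_max - (a*x0 + b*u) is loosest
# at u = u_min (since b > 0), which yields the maximal feasible level.
slack = x_max - (a * x0 + b * u_min)
p_star = NormalDist().cdf(slack / sigma)

# Stage 2: minimize z1^2 + u^2 under the tightening at a relaxed level
# p_tilde <= p_star, here by a fine grid over the admissible inputs.
p_tilde = min(p_star, 0.95)
q_p = NormalDist().inv_cdf(p_tilde)
best_u, best_cost = None, float("inf")
steps = 2000
for i in range(steps + 1):
    u = u_min + (u_max - u_min) * i / steps
    z1 = a * x0 + b * u
    if z1 + sigma * q_p <= x_max:
        cost = z1**2 + u**2
        if cost < best_cost:
            best_u, best_cost = u, cost

print(f"p* = {p_star:.4f}, u = {best_u:.3f}, cost = {best_cost:.3f}")
```

Here the unconstrained cost minimizer happens to satisfy the tightened bound, so Stage 2 recovers it; when it does not, the tightening becomes active and the optimal input sits on the shrunk constraint boundary.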
Joint Optimization and Risk Allocation
In advanced convex chance-constrained frameworks (Kato et al., 26 Jan 2026), the allocation of risk (violation probabilities) across scalar constraints is incorporated as an optimization variable, resulting in strictly convex programs that ensure uniqueness and regularity of the solution. This enables adaptive allocation of conservatism according to constraint criticality.
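The value of treating risk allocation as a decision variable is easy to see with two scalar Gaussian constraints sharing one violation budget via a union bound. The sketch below compares a uniform split against an optimized split found by grid search; the sensitivities and budget are illustrative assumptions, and the grid search stands in for the convex programs of the cited work.

```python
# Risk allocation as a decision variable: split a joint violation budget
# eps across two Gaussian constraints (union bound eps1 + eps2 = eps) to
# minimize the total tightening, by grid search over the split.
from statistics import NormalDist

nd = NormalDist()
s1, s2 = 0.1, 0.4        # per-constraint sensitivities (stds of a_i^T x)
eps = 0.05               # joint violation budget

best = None
for i in range(1, 1000):
    e1 = eps * i / 1000.0
    e2 = eps - e1
    total = s1 * nd.inv_cdf(1 - e1) + s2 * nd.inv_cdf(1 - e2)
    if best is None or total < best[0]:
        best = (total, e1, e2)

total, e1, e2 = best
uniform = s1 * nd.inv_cdf(1 - eps / 2) + s2 * nd.inv_cdf(1 - eps / 2)
print(f"optimized split eps1={e1:.4f}, eps2={e2:.4f}: "
      f"tightening {total:.4f} vs uniform {uniform:.4f}")
```

The optimizer assigns more of the budget to the more sensitive constraint, which is exactly the adaptive allocation of conservatism according to constraint criticality described above.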
4. Implementation Strategies: Reachable Set Representations and Zonotopic Methods
The computational bottleneck in tube-based MPC is manipulation of set differences, particularly for high-dimensional ellipsoids and polytopic state/input constraints. (Engelaar et al., 2023) proposes explicit zonotopic over-approximations of error ellipsoids, yielding a hierarchy:
- Over-approximate ellipsoid by a zonotope,
- Convert to half-space (facet) representation,
- Enumerate the vertices,
- Restate as a finite intersection of shifted polytopes.
This pipeline results in linear inequality descriptions of the tightened constraints, allowing full LP/QP reformulations.
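The first step of the pipeline can be sketched in two dimensions: the interval hull of an ellipsoid is a box, i.e. an axis-aligned zonotope, whose half-widths along the axes are $\sqrt{P_{ii}}$ for shape matrix $P$. The code below builds this over-approximation, states its half-space representation, and checks containment by sampling; the shape matrix is an illustrative assumption.

```python
# Step one of the zonotopic pipeline: over-approximate the ellipsoid
# {e : e^T P^{-1} e <= 1} by its interval hull, an axis-aligned zonotope
# with half-widths sqrt(P_ii). Containment is spot-checked by sampling.
import math, random

P = [[0.04, 0.01], [0.01, 0.09]]   # shape matrix (assumed positive definite)
half_widths = [math.sqrt(P[0][0]), math.sqrt(P[1][1])]

def in_box(e):
    # half-space (facet) representation: |e_i| <= half_widths[i]
    return all(abs(e[i]) <= half_widths[i] for i in range(2))

def in_ellipsoid(e):
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    inv = [[P[1][1] / det, -P[0][1] / det],
           [-P[1][0] / det, P[0][0] / det]]
    q = sum(e[i] * inv[i][j] * e[j] for i in range(2) for j in range(2))
    return q <= 1.0

random.seed(1)
contained = True
for _ in range(10000):
    e = [random.uniform(-0.3, 0.3), random.uniform(-0.4, 0.4)]
    if in_ellipsoid(e) and not in_box(e):
        contained = False
print("ellipsoid inside zonotope:", contained)
```

Tighter zonotopes with more generators shrink the over-approximation at the cost of more facets in the resulting half-space description, which is the trade-off the remaining pipeline steps manage.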
5. Theoretical Guarantees: Feasibility, Recursive Satisfaction, and Optimality
Two principal guarantees are established within these frameworks:
- Recursive Feasibility: If the MPC optimization is feasible at the initial time, the tube-MPC law preserves feasibility at all future steps. This property is ensured by the positive invariance of terminal sets under the feedback law and the structure of the tightened constraints (Engelaar et al., 2023).
- Closed-Loop Satisfaction of Chance Constraints: The construction ensures that, in closed loop, the stochastic state and input trajectories satisfy the original probabilistic constraints at all times, conditioned on the system being initialized in a feasible state.
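The closed-loop guarantee can be checked empirically: hold the nominal state at the tightened bound from the tube construction and verify by simulation that the true state exceeds the original bound no more often than the allowed rate. The scalar sketch below uses a Chebyshev-sized tube; all numbers are illustrative assumptions.

```python
# Empirical check of closed-loop chance-constraint satisfaction: with the
# nominal state held at the tightened bound, the true state x = z + e
# should violate x_max at no more than the allowed per-step rate.
import math, random

A, B, K = 1.0, 1.0, -0.6
a_cl = A + B * K                   # closed-loop pole, 0.4
sigma, p, x_max = 0.2, 0.9, 1.0
var_e = sigma**2 / (1 - a_cl**2)   # stationary error variance
r = math.sqrt(var_e / (1 - p))     # Chebyshev tube radius
z = x_max - r                      # nominal held at the tightened bound

random.seed(2)
trials, N = 2000, 30
steps_total, steps_viol = 0, 0
for _ in range(trials):
    e = 0.0
    for _ in range(N):
        e = a_cl * e + random.gauss(0.0, sigma)
        steps_total += 1
        if z + e > x_max:
            steps_viol += 1

rate = steps_viol / steps_total
print(f"per-step violation rate: {rate:.4f} (allowed {1 - p:.2f})")
```

Because the Chebyshev bound is distribution-free, the empirical rate under Gaussian noise is far below the allowed level; the Gaussian quantile would give a tighter tube with the same guarantee.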
In convex formulations that admit strict convexity (as in (Kato et al., 26 Jan 2026)), global optimality, uniqueness, and continuity with respect to parameters are ensured, with robustness to uncertainty in specifications and model parameters.
6. Applications and Illustrative Examples
Framework instantiations include:
- SMPC for constrained DC-DC converter regulation: Demonstrating minimal necessary relaxation of chance constraints and achieving closed-loop satisfaction with the tightest feasible margins (Engelaar et al., 2023).
- CC-MPC for urban drainage under rainfall forecast uncertainty: Exploiting moment propagation and margin rules to guarantee tank-level, overflow, and actuator limits at prescribed probability levels, achieving near-parity with perfect-forecast deterministic MPC in nominal scenarios, and superior robustness to forecast bias (Svensen et al., 2020).
- Convex chance-constrained MPC for hybrid automotive powertrains under specification uncertainty: Implementing monotonic risk allocation to balance constraint violation probabilities among conflicting requirements (e.g., SoC, torque limits), validated by empirical violation statistics over extensive Monte Carlo trials (Kato et al., 26 Jan 2026).
These applications illustrate the flexibility and computational tractability of modern chance-constrained stochastic control in high-dimensional, real-time, and safety-critical domains alike.
7. Extensions and Related Paradigms
Contemporary directions include:
- Adaptive relaxation and non-conservative scheduling: Data-driven online tuning of relaxation margins to asymptotically match allowed violation rates, yielding non-conservative trade-offs between constraint violations and economic objectives without requiring parametric disturbance knowledge (Ghosh et al., 2024).
- Covariance control under chance constraints: Extending beyond mean trajectories, formulations steer the system's probability distribution and covariance subject to chance constraints, requiring coupled, often semidefinite, convex optimization (Okamoto et al., 2018, Pilipovsky et al., 2023).
- Data-driven and distribution-free methods: Techniques employing kernel distribution embeddings (Thorpe et al., 2022), sample-conformal prediction (Vogel et al., 11 Dec 2025), and explicit sample-statistics with almost-sure guarantees (Priore et al., 2023) eliminate reliance on parametric disturbance models and enable provable constraint satisfaction from finite data.
- Distributionally robust frameworks utilizing Wasserstein balls and uncertainty sets: These frameworks guarantee constraint satisfaction across all plausible distributions within an ambiguity set, providing rigorous out-of-sample safety (Kordabad et al., 2024).
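The price of distributional robustness is easy to quantify for scalar affine constraints: the Cantelli (one-sided Chebyshev) multiplier $\sqrt{p/(1-p)}$ holds for any distribution with matched mean and variance, whereas the normal quantile $\Phi^{-1}(p)$ assumes Gaussianity. The comparison below is a stdlib-only sketch.

```python
# Distribution-free (Cantelli) vs Gaussian margin multipliers for the
# constraint Pr(a^T x <= b) >= p with known mean and variance.
from statistics import NormalDist
import math

for p in (0.90, 0.95, 0.99):
    cantelli = math.sqrt(p / (1 - p))          # valid for any distribution
    gaussian = NormalDist().inv_cdf(p)         # valid under Gaussianity
    print(f"p={p:.2f}: Cantelli {cantelli:.2f} vs Gaussian {gaussian:.2f}")
```

The gap widens rapidly as $p \to 1$, which is why ambiguity sets (e.g., Wasserstein balls around an empirical distribution) are attractive: they interpolate between these two extremes using the available data.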
The interplay between computational tractability, tightness of conservatism, parameterization of uncertainty, and closed-loop robustness continues to drive advancements in chance-constrained stochastic control frameworks.