Constrained Covariance Steering
- Constrained Covariance Steering (CSS) is a stochastic optimal control method that simultaneously steers the state mean and covariance, enabling explicit management of uncertainty.
- It converts joint chance constraints into tractable convex formulations using Gaussian and Cantelli bounds, resulting in a single SDP for efficient computation.
- When integrated within model predictive control, CSS ensures recursive feasibility, improved runtime, and robust safety guarantees in practical, high-noise applications.
Constrained Covariance Steering (CSS) refers to a family of stochastic optimal control methodologies in which the controller explicitly steers both the mean and the covariance of a system’s state distribution while satisfying probabilistic (chance) constraints on states and/or controls. CSS generalizes the classic mean-centric approach by making covariance a controlled quantity, enabling explicit management of uncertainty propagation and robust satisfaction of safety requirements in the presence of stochastic disturbances. In contrast to pointwise robust or min-max control, CSS typically seeks to achieve terminal distributions or state tubes, subject to joint probabilistic safety guarantees defined over polytopic constraint sets.
1. Fundamentals and Mathematical Formulation
Consider a discrete-time, linear stochastic system subject to additive Gaussian noise: $x_{k+1} = A_k x_k + B_k u_k + D_k w_k$, with $w_k \sim \mathcal{N}(0, I)$ i.i.d. The state and control are subject to polytopic chance constraints, e.g., $x_k \in \mathcal{X} = \{x : \alpha_j^\top x \le \beta_j,\ j = 1, \dots, M\}$,
which are enforced in a joint sense over the horizon: $\Pr\left(x_k \in \mathcal{X},\ k = 0, \dots, N\right) \ge 1 - \epsilon$.
The CSS problem is to design a disturbance-feedback (or affine) control policy, mapping noise histories to controls, that steers the initial distribution $(\mu_0, \Sigma_0)$ to a prescribed terminal distribution $(\mu_f, \Sigma_f)$ at terminal time $N$, while minimizing a convex quadratic cost and subject to chance constraints.
The canonical finite-horizon formulation is:
- Decision variables: the feedforward terms $v_k$ and feedback gains $K_k$ for a controller of the form $u_k = v_k + K_k y_k$, where $y_k$ is the zero-mean deviation process driven by the noise ($y_{k+1} = A_k y_k + D_k w_k$, $y_0 = x_0 - \mu_0$).
- Mean and covariance dynamics: $\mu_{k+1} = A_k \mu_k + B_k v_k$ and $\Sigma_{k+1} = (A_k + B_k K_k)\,\Sigma_k\,(A_k + B_k K_k)^\top + D_k D_k^\top$,
where $\mu_k = \mathbb{E}[x_k]$ and $\Sigma_k = \mathbb{E}[(x_k - \mu_k)(x_k - \mu_k)^\top]$.
- Quadratic cost: $J = \mathbb{E}\left[\sum_{k=0}^{N-1} x_k^\top Q_k x_k + u_k^\top R_k u_k\right]$ with $Q_k \succeq 0$, $R_k \succ 0$.
- Convexified chance constraints: Each face $\alpha_j^\top x \le \beta_j$ of $\mathcal{X}$ is reformulated via Boole and Cantelli (or Gaussian) inequalities into second-order-cone constraints of the form $\alpha_j^\top \mu_k + \Phi^{-1}(1 - \epsilon_j)\,\|\Sigma_k^{1/2} \alpha_j\|_2 \le \beta_j$, with facet risks allocated so that $\sum_j \epsilon_j \le \epsilon$.
- Terminal constraints: $\mu_N = \mu_f$, $\Sigma_N \preceq \Sigma_f$.
This yields an SDP whose variable count and number of LMI and second-order-cone constraints grow linearly with the horizon, solvable in polynomial time (scaling cubically with horizon $N$) using off-the-shelf solvers (Okamoto et al., 2019).
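As a concrete sanity check on the recursions above, the following numpy sketch propagates the mean and covariance under a fixed feedback gain and verifies the terminal condition $\Sigma_N \preceq \Sigma_f$ as an eigenvalue test. All matrices ($A$, $B$, $D$, the gain $K$, the bound $\Sigma_f$) are made-up illustrative values, not numbers from the cited work, and the gain is hand-picked rather than optimized.

```python
import numpy as np

# Illustrative 2D system; all numbers are assumptions for the sketch.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
D = 0.05 * np.eye(2)
K = np.array([[-8.0, -4.0]])       # fixed stabilizing gain (illustrative)

mu = np.array([2.0, 0.0])          # initial mean
Sigma = 0.1 * np.eye(2)            # initial covariance
Acl = A + B @ K                    # closed-loop matrix for the deviation

N = 200
for _ in range(N):
    v = K @ mu                     # mean-feedback feedforward (illustrative choice)
    mu = A @ mu + B @ v            # mean recursion
    Sigma = Acl @ Sigma @ Acl.T + D @ D.T   # covariance recursion

# Terminal-covariance check Sigma_N <= Sigma_f, cast as an eigenvalue test
Sigma_f = 0.1 * np.eye(2)          # illustrative terminal bound
terminal_ok = np.min(np.linalg.eigvalsh(Sigma_f - Sigma)) >= 0.0
```

Note that the feedback gain only enters the covariance recursion, while the feedforward only moves the mean; this decoupling is what lets CSS treat the two channels with separate decision variables.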
2. Chance Constraint Reformulation and Convexity
Chance constraints over polytopic sets are made tractable in CSS by decomposing the joint constraint into individual facet constraints using Boole's inequality, and then tightening each with Gaussian or Cantelli bounds. For a scalar linear function $\alpha^\top x_k$ of a Gaussian state $x_k \sim \mathcal{N}(\mu_k, \Sigma_k)$, the constraint $\Pr(\alpha^\top x_k \le \beta) \ge 1 - \epsilon$ is equivalently: $\alpha^\top \mu_k + \Phi^{-1}(1 - \epsilon)\,\|\Sigma_k^{1/2}\alpha\|_2 \le \beta$.
When system or noise distributions are non-Gaussian, Cantelli's inequality provides a more conservative but distributionally robust reformulation (Knaup et al., 2023, Renganathan et al., 2022). For systems with multiplicative noise or parametric uncertainty, LMI relaxations and variable-lifting strategies are employed to maintain convexity; block-LMI constraints handle lifted covariance terms.
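The difference between the two tightenings is easy to see numerically. The sketch below compares the Gaussian quantile multiplier $\Phi^{-1}(1-\epsilon)$ with the distribution-free Cantelli multiplier $\sqrt{(1-\epsilon)/\epsilon}$ on a single half-space constraint; the constraint data ($\alpha$, $\beta$, $\mu$, $\Sigma$) are illustrative values chosen so the conservatism gap is visible.

```python
import math
from statistics import NormalDist
import numpy as np

def gaussian_multiplier(eps):
    # Phi^{-1}(1 - eps): exact back-off for Gaussian states
    return NormalDist().inv_cdf(1.0 - eps)

def cantelli_multiplier(eps):
    # sqrt((1 - eps)/eps): valid for any distribution with matching moments
    return math.sqrt((1.0 - eps) / eps)

eps = 0.05
g = gaussian_multiplier(eps)       # ~1.645
c = cantelli_multiplier(eps)       # ~4.359, considerably more conservative

# Illustrative half-space constraint alpha' x <= beta on a Gaussian state
alpha = np.array([1.0, 0.0])
beta = 2.5
mu = np.array([2.0, 0.0])
Sigma = 0.04 * np.eye(2)
std = math.sqrt(alpha @ Sigma @ alpha)      # std of alpha' x

gauss_feasible = alpha @ mu + g * std <= beta    # tightened constraint holds
cantelli_feasible = alpha @ mu + c * std <= beta # same data fails the Cantelli back-off
```

For the same risk level, the Cantelli back-off here is more than twice the Gaussian one, which is exactly the conservatism price paid for distributional robustness.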
Convexity is central: the CSS program can be cast as a single SDP, as opposed to the nested or sequential nonconvex formulations found in disturbance-feedback SMPC or robust MPC. Critical constraints—mean/covariance recursions, chance constraints, and terminal set membership—are all convex in the (suitably lifted) decision variables. This guarantees efficient, tractable synthesis and deployability.
3. CSS in Model Predictive Control (Receding Horizon)
CSS is embedded within Stochastic Model Predictive Control (SMPC) by solving a finite-horizon CSS problem at each time, applying only the first control, re-measuring the state, and updating the belief. Recursive feasibility is ensured by imposing a terminal mean-covariance invariant set, i.e., requiring $(\mu_N, \Sigma_N) \in \mathcal{X}_f$, with $\mathcal{X}_f$ invariant under a terminal feedback law $u = K_f(x - \mu)$ (e.g., obtained from the associated Riccati equation), whose closed-loop matrix $A + B K_f$ keeps the mean inside $\mathcal{X}_f$ and admits a covariance fixed point satisfying $(A + B K_f)\,\Sigma_f\,(A + B K_f)^\top + D D^\top \preceq \Sigma_f$.
This construction guarantees that the controller can be re-solved at every step and the overall closed-loop satisfies constraint satisfaction and mean/covariance assignments at all times (Okamoto et al., 2019).
At each time $t$:
- Measure $x_t$ and form the current belief $(\mu_t, \Sigma_t)$.
- Solve the finite-horizon CSS SDP for horizon $N$, returning $(v_k, K_k)$ for $k = t, \dots, t+N-1$.
- Apply only the first control, $u_t = v_t + K_t (x_t - \mu_t)$.
This receding-horizon CSS-based SMPC (CS-SMPC) enjoys lower computational cost per step (e.g., a roughly 30% per-step reduction demonstrated on simple 2D systems) compared to disturbance-feedback SMPC, owing to the block-diagonal feedback structure and the smaller optimization-variable count.
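The receding-horizon loop above can be sketched as follows. Here `solve_css` is a hypothetical stand-in for the finite-horizon CSS SDP: instead of actually solving an SDP it returns a fixed, hand-picked affine policy, purely so the loop structure (measure, solve, apply first control, update belief) runs end to end. All matrices and gains are illustrative assumptions.

```python
import numpy as np

# Illustrative 2D system; numbers are assumptions for the sketch.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
D = 0.02 * np.eye(2)

def solve_css(mu, Sigma, N=10):
    # Hypothetical stand-in for the CSS SDP: returns a fixed stabilizing
    # gain and a mean-regulating feedforward (a real implementation would
    # optimize (v_k, K_k) over the horizon subject to the constraints).
    K = np.array([[-8.0, -4.0]])
    v = K @ mu
    return v, K

x = np.array([3.0, 0.0])               # true state
mu, Sigma = x.copy(), 0.01 * np.eye(2) # current belief
for t in range(150):
    v, K = solve_css(mu, Sigma)        # re-solve at every step
    u = v + K @ (x - mu)               # apply only the first control
    w = rng.standard_normal(2)
    x = A @ x + B @ u + D @ w          # true stochastic dynamics
    mu = A @ mu + B @ v                # belief update: mean
    Acl = A + B @ K
    Sigma = Acl @ Sigma @ Acl.T + D @ D.T   # belief update: covariance
```

Even with unbounded Gaussian noise injected at every step, the closed-loop state stays in a neighborhood of the origin, since the stand-in policy is stabilizing.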
4. Stability, Recursive Feasibility, and Unbounded Noise
CSS maintains closed-loop stability (in the sense of bounded average stage cost) and recursive feasibility even in the presence of unbounded (Gaussian) process noise:
- Stability: Follows from standard analysis with a terminal cost and Lyapunov arguments for the mean, leveraging the terminal set and feedback invariance properties.
- Recursive feasibility: The terminal mean-covariance invariant set ensures that the CSS subproblem at each time step is feasible provided the previous step was feasible, despite the possible realization of unbounded noise.
- Handling unbounded noise: Gaussian additive noise is managed by direct steering of the covariance trajectory and explicit constraint tightening based on the chosen violation probability, obviating the need for explicit robust tubes or over-conservative disturbance sets.
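The last point—tightening calibrated to the chosen violation probability, rather than bounding the disturbance—can be checked by Monte Carlo. The sketch below tightens a half-space constraint on a Gaussian state by exactly $\Phi^{-1}(1-\epsilon)\,\sqrt{\alpha^\top \Sigma\, \alpha}$ and verifies that the empirical violation rate matches $\epsilon$, even though the noise is unbounded. All constraint and moment values are illustrative.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
eps = 0.05                                     # chosen violation probability
mu = np.array([1.0, 0.5])                      # illustrative state mean
Sigma = np.array([[0.09, 0.02],
                  [0.02, 0.04]])               # illustrative state covariance
alpha = np.array([1.0, 1.0])                   # half-space normal

std = np.sqrt(alpha @ Sigma @ alpha)           # std of alpha' x
# Place the bound exactly at the tightened value, so the chance
# constraint holds with equality:
beta = alpha @ mu + NormalDist().inv_cdf(1 - eps) * std

xs = rng.multivariate_normal(mu, Sigma, size=200_000)
violation_rate = np.mean(xs @ alpha > beta)    # should be close to eps
```

No disturbance set or robust tube appears anywhere: the prescribed risk level is met purely through moment propagation and quantile-based back-off.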
5. Comparative Advantages and Applications
CSS offers key advantages over classical SMPC and robust approaches:
- Direct covariance control: Eliminates the need for ad-hoc tube parameters or conservative disturbance-feedback parameterizations.
- Convexity and computational tractability: A single convex program is solved rather than a sequence of nonconvex or hybrid programs.
- Guaranteed probabilistic constraint satisfaction: Explicit shaping of the propagated covariance ensures prescribed safety probabilities can be robustly enforced—even as state and control constraints are formulated as chance constraints.
- Reduced conservatism: Absence of tube conservatism and the ability to shape the terminal covariance (e.g., via LMI tools), as shown in race-car path tracking and other applications, enables tighter reference tracking and minimal lap-time while maintaining high confidence in safety margins (Okamoto et al., 2019).
Applications span from simple unstable linear systems to high-dimensional linearized vehicle models with state- and input-dependent constraints, recursive feasibility guarantees, and Kalman output-filter corrections. Empirical results demonstrate reduced runtime and tighter constraint satisfaction compared to competing SMPC variants.
6. Notable Extensions and Open Issues
CSS extends directly to systems with:
- Additive/multiplicative noise: Convex relaxations and proper lifting ensure CSS remains tractable in the presence of parametric uncertainties (Knaup et al., 2023).
- Output feedback: Integration with Kalman filtering, where constraints and feedback policy design explicitly depend on estimation error and process/measurement noise structures (Ridderhof et al., 2020).
- Nonlinear dynamics: Sequential convexification, operator splitting, and stochastic optimization strategies are used where system nonlinearities or contact-rich phenomena are present (Ratheesh et al., 2024, Shirai et al., 2023).
- Distributional robustness: Cantelli-based risk allocation and moment-based ambiguity sets allow CSS to provide safety guarantees even under non-Gaussian disturbances (Renganathan et al., 2022).
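For the output-feedback extension, the key additional quantity is the estimation-error covariance, which propagates through the standard Kalman recursion and enters the effective uncertainty used for constraint tightening. The sketch below iterates that covariance recursion to its steady state; the system and noise matrices are illustrative assumptions, not values from the cited works.

```python
import numpy as np

# Illustrative 2D system with a position-only measurement.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)                  # process-noise covariance
R = np.array([[0.04]])                # measurement-noise covariance

P = np.eye(2)                         # initial estimation-error covariance
for _ in range(100):
    P_pred = A @ P @ A.T + Q          # predict
    S = C @ P_pred @ C.T + R          # innovation covariance
    Kk = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    P = (np.eye(2) - Kk @ C) @ P_pred     # measurement update
```

In output-feedback CSS, a term derived from this steady-state $P$ is added to the controlled-state covariance before the chance constraints are tightened, so that estimation error is accounted for alongside process noise.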
Open issues include optimal risk allocation across constraints (to reduce conservatism), control with strict input hard bounds, and scaling to higher dimensions or more general ambiguity descriptions. Practical extensions include integration with learning-based prediction, model identification, and online receding-horizon control in safety-critical robotics and autonomous systems.
Table: CSS Core Features vs. Traditional SMPC
| Feature | CSS | Classical SMPC |
|---|---|---|
| Covariance control | Direct, explicit | Indirect, conservative |
| Constraint type | Chance constraints (joint/Boole) | Tube-based or pointwise |
| Convexity | Single SDP (polynomial time) | Often sequential/non-convex |
| Computational cost | Cubic in horizon length | Higher (block-lifting, tubes) |
| Recursive feasibility | Guaranteed by terminal invariant set | Not always explicit |
| Applications | Stochastic, uncertain, contact-rich | Less suitable for high-noise |
7. Representative Numerical Results
Empirical studies cited in (Okamoto et al., 2019) show:
- Compliance with tight chance constraints in unstable 2D systems.
- 30% per-step runtime reduction versus disturbance-feedback SMPC in the above setting.
- Race-car tracking under process noise: explicit covariance steering lets the mean trajectory skirt the track boundaries for lap-time minimization while meeting lateral-error chance bounds at the end of the horizon with high probability, enabled by direct terminal-covariance shaping via LMI methods.
- Absence of trial-and-error tuning for cost matrices: chance constraint satisfaction and trajectory shaping are handled in a principled, convex-optimization fashion.
In summary, Constrained Covariance Steering provides a systematic, convex, and computationally efficient framework for joint mean and covariance control for stochastic linear systems under explicit state and control chance constraints, delivering tractable solutions with high confidence safety margins in practical scenarios.