
Constrained Covariance Steering

Updated 17 November 2025
  • Constrained Covariance Steering (CSS) is a stochastic optimal control method that simultaneously steers the state mean and covariance, enabling explicit management of uncertainty.
  • It converts joint chance constraints into tractable convex formulations using Gaussian and Cantelli bounds, resulting in a single SDP for efficient computation.
  • When integrated within model predictive control, CSS ensures recursive feasibility, improved runtime, and robust safety guarantees in practical, high-noise applications.

Constrained Covariance Steering (CSS) refers to a family of stochastic optimal control methodologies in which the controller explicitly steers both the mean and the covariance of a system’s state distribution while satisfying probabilistic (chance) constraints on states and/or controls. CSS generalizes the classic mean-centric approach by making covariance a controlled quantity, enabling explicit management of uncertainty propagation and robust satisfaction of safety requirements in the presence of stochastic disturbances. In contrast to pointwise robust or min-max control, CSS typically seeks to achieve terminal distributions or state tubes, subject to joint probabilistic safety guarantees defined over polytopic constraint sets.

1. Fundamentals and Mathematical Formulation

Consider a discrete-time, linear stochastic system subject to additive Gaussian noise: x_{k+1} = A x_k + B u_k + D w_k, \qquad w_k \sim \mathcal N(0, I), with x_0 \sim \mathcal N(\mu_0, \Sigma_0), so that the process-noise covariance is DD^\top. The state and control are subject to polytopic chance constraints, e.g.,

X = \{ x : \alpha_{x,i}^\top x \leq \beta_{x,i},\ i=1,\dots,N_s \}, \qquad U = \{ u : \alpha_{u,j}^\top u \leq \beta_{u,j},\ j=1,\dots,N_c \}

which are enforced in a joint sense over the horizon: \Pr(x_k \in X) \geq 1 - \epsilon_x, \qquad \Pr(u_k \in U) \geq 1 - \epsilon_u.

The CSS problem is to design an affine feedback control policy that steers (\mu_0, \Sigma_0) \to (\mu_f, \Sigma_f) at terminal time N, while minimizing a convex quadratic cost subject to the chance constraints.

The canonical finite-horizon formulation is:

  • Decision variables: \{v_k, K_k\}_{k=0}^{N-1} for a controller of the form

u_k = v_k + K_k y_k, \qquad y_k = x_k - \mu_k, \quad y_0 = x_0 - \mu_0

  • Mean and covariance dynamics:

\mu_{k+1} = A \mu_k + B v_k, \qquad \Sigma_{k+1} = (A + B K_k) \Sigma_k (A + B K_k)^\top + DD^\top

where the error y_k = x_k - \mu_k evolves in closed loop as y_{k+1} = (A + B K_k) y_k + D w_k.
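The mean and covariance recursions above can be propagated directly. The following minimal sketch uses a hypothetical 2D system with assumed (not paper-provided) matrices and a fixed, assumed-stabilizing feedback gain, under the standard state-error feedback parameterization u_k = v_k + K_k (x_k - \mu_k):

```python
import numpy as np

# Hypothetical system matrices (illustrative, not from the cited paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
D = 0.05 * np.eye(2)          # noise enters as D w_k, w_k ~ N(0, I)

N = 20
mu = np.array([1.0, 0.0])     # initial mean mu_0
Sigma = 0.1 * np.eye(2)       # initial covariance Sigma_0
v = np.zeros((N, 1))          # feedforward terms (placeholder values)
K = np.tile(np.array([[-5.0, -3.0]]), (N, 1, 1))  # assumed stabilizing gains

for k in range(N):
    # Mean dynamics: mu_{k+1} = A mu_k + B v_k
    mu = A @ mu + (B @ v[k]).ravel()
    # Covariance dynamics: Sigma_{k+1} = (A + B K_k) Sigma_k (A + B K_k)^T + D D^T
    Acl = A + B @ K[k]
    Sigma = Acl @ Sigma @ Acl.T + D @ D.T

print(mu, np.trace(Sigma))
```

Because the closed-loop matrix A + B K_k is Schur stable for this gain, the propagated covariance contracts toward a small stationary value while the mean follows the noise-free dynamics.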

  • Quadratic cost:

J = \sum_{k=0}^{N-1} \left[ \mu_k^\top Q \mu_k + \operatorname{Tr}(Q \Sigma_k) + v_k^\top R v_k + \operatorname{Tr}(K_k^\top R K_k \Sigma_k) \right]

  • Convexified chance constraints: Each face i of X is reformulated via Boole and Cantelli (or Gaussian) inequalities into second-order-cone constraints:

\alpha_{x,i}^\top \mu_k + \Phi^{-1}(1-p_{x,i}) \|\Sigma_k^{1/2} \alpha_{x,i}\| \leq \beta_{x,i}, \qquad \sum_i p_{x,i} \leq \epsilon_x

  • Terminal constraints: \mu_N = \mu_f and \Sigma_N \preceq \Sigma_f, i.e., the terminal covariance is dominated by the target covariance.

This yields an SDP with O(N n_x n_u) decision variables and O(N) LMIs and second-order-cone constraints, solvable in polynomial time (scaling roughly cubically with horizon N) using off-the-shelf solvers (Okamoto et al., 2019).

2. Chance Constraint Reformulation and Convexity

Chance constraints over polytopic sets are made tractable in CSS by decomposing the joint constraint into individual facet constraints using Boole's inequality, and then tightening each with Gaussian or Cantelli bounds. For a scalar linear function \alpha^\top x of x \sim \mathcal N(\mu, \Sigma), the constraint \Pr(\alpha^\top x \leq \beta) \geq 1 - p is equivalent to \alpha^\top \mu + \Phi^{-1}(1-p) \|\Sigma^{1/2} \alpha\| \leq \beta.

When system or noise distributions are non-Gaussian, Cantelli's inequality provides a more conservative but distributionally robust reformulation (Knaup et al., 2023, Renganathan et al., 2022). For systems with multiplicative noise or parametric uncertainty, LMI relaxations and variable-lifting strategies are employed to maintain convexity; block-LMI constraints handle lifted covariance terms.
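The difference between the Gaussian quantile \Phi^{-1}(1-p) and the distribution-free Cantelli multiplier \sqrt{(1-p)/p} can be computed directly. The sketch below (stdlib only; the bisection-based inverse CDF is our own illustrative helper, not a library routine) shows that the Cantelli multiplier is substantially more conservative at the same risk level:

```python
import math

def gaussian_quantile(q):
    """Inverse standard-normal CDF Phi^{-1}(q), via bisection on erf."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cantelli_multiplier(p):
    """Distribution-free tightening: Pr(a^T x > b) <= p holds whenever
    a^T mu + sqrt((1-p)/p) * ||Sigma^{1/2} a|| <= b, for any distribution
    with the given mean and covariance."""
    return math.sqrt((1.0 - p) / p)

p = 0.05
z_gauss = gaussian_quantile(1.0 - p)   # ~1.645 under Gaussian noise
z_cantelli = cantelli_multiplier(p)    # ~4.359, distributionally robust
print(z_gauss, z_cantelli)
```

This quantifies the conservatism trade-off mentioned above: at p = 0.05 the Cantelli constraint tightens the facet roughly 2.6 times more than the Gaussian one.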

Convexity is central: the CSS program can be cast as a single SDP, as opposed to the nested or sequential nonconvex formulations found in disturbance-feedback SMPC or robust MPC. The critical constraints (mean/covariance recursions, chance constraints, and terminal set membership) are all convex in (v_k, K_k). This guarantees efficient, tractable synthesis and deployment.

3. CSS in Model Predictive Control (Receding Horizon)

CSS is embedded within Stochastic Model Predictive Control (SMPC) by solving a finite-horizon CSS problem at each time, applying only the first control, re-measuring the state, and updating the belief. Recursive feasibility is ensured by imposing a terminal mean-covariance invariant set: \mu_N \in \mathcal X_f^\mu, \quad \Sigma_N \preceq \Sigma_f, with \mathcal X_f^\mu invariant under the feedback \tilde K solving

\Sigma_f = (A + B \tilde K)\Sigma_f(A + B \tilde K)^\top + DD^\top
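The terminal covariance \Sigma_f is the stationary solution of a discrete Lyapunov-type equation, which can be found by iterating the recursion to a fixed point when A + B\tilde K is Schur stable. A minimal sketch with assumed (illustrative) system matrices and gain:

```python
import numpy as np

# Hypothetical matrices and terminal gain (illustrative, not from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
D = 0.05 * np.eye(2)
K = np.array([[-5.0, -3.0]])   # assumed stabilizing terminal feedback

# Iterate Sigma <- (A + B K) Sigma (A + B K)^T + D D^T from zero; this
# converges geometrically to the fixed point Sigma_f when A + B K is stable.
Acl = A + B @ K
Sigma_f = np.zeros((2, 2))
for _ in range(2000):
    Sigma_f = Acl @ Sigma_f @ Acl.T + D @ D.T

residual = np.linalg.norm(Acl @ Sigma_f @ Acl.T + D @ D.T - Sigma_f)
print(Sigma_f, residual)
```

In practice one would solve the Lyapunov equation directly (e.g., via a linear solve over vectorized matrices) rather than by iteration; the loop here simply makes the fixed-point property explicit.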

This construction guarantees that the controller can be re-solved at every step and that the closed loop satisfies the constraints and mean/covariance assignments at all times (Okamoto et al., 2019).

At each time kk:

  1. Measure x_k and form the current belief (\mu_k, \Sigma_k).
  2. Solve the finite-horizon CSS SDP over horizon N, returning \{v_{t|k}, K_{t|k}\} for t = k, \dots, k+N-1.
  3. Apply u_k = v_{k|k} + K_{k|k} (x_k - \mu_k).

This receding-horizon CSS-based SMPC (CS-SMPC) enjoys lower computational costs per step (e.g., 30% per-step reduction demonstrated on simple 2D systems) compared to disturbance-feedback SMPC, due to the block-diagonal feedback structure and smaller optimization variable counts.
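The receding-horizon loop above can be sketched as follows. The CSS SDP solve is replaced here by a stub that returns a fixed assumed-stabilizing gain and a feedforward steering the mean toward the origin; a real implementation would call an SDP solver at this step. All matrices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant (illustrative, not from the cited paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
D = 0.05 * np.eye(2)

def solve_css_stub(mu, Sigma, N=10):
    """Placeholder for the finite-horizon CSS SDP solve. A real
    implementation returns optimized sequences {v_{t|k}, K_{t|k}};
    here the feedforward just regulates the mean to the origin with a
    fixed assumed-stabilizing gain, for illustration only."""
    K0 = np.array([[-5.0, -3.0]])
    v0 = K0 @ mu
    return v0, K0

x = np.array([4.0, 0.0])               # true plant state
for k in range(100):
    mu, Sigma = x.copy(), np.zeros((2, 2))   # step 1: measure x_k
    v0, K0 = solve_css_stub(mu, Sigma)       # step 2: solve CSS (stubbed)
    u = v0 + K0 @ (x - mu)                   # step 3: apply first control only
    x = A @ x + B @ u + D @ rng.standard_normal(2)

print(x)
```

Even with the stubbed solver, the closed loop regulates the unstable plant to a neighborhood of the origin whose size is set by the process noise, which is the qualitative behavior CS-SMPC shapes explicitly through the covariance constraints.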

4. Stability, Recursive Feasibility, and Unbounded Noise

CSS maintains closed-loop stability (in the sense of bounded average stage cost) and recursive feasibility even in the presence of unbounded (Gaussian) process noise:

  • Stability: Follows from standard analysis with a terminal cost and Lyapunov arguments for the mean, leveraging the terminal set and feedback invariance properties.
  • Recursive feasibility: The terminal mean-covariance invariant set ensures that the CSS subproblem at each time step is feasible provided the previous step was feasible, despite the possible realization of unbounded noise.
  • Handling unbounded noise: Gaussian additive noise is managed by direct steering of the covariance trajectory and explicit constraint tightening based on the chosen violation probability, obviating the need for explicit robust tubes or over-conservative disturbance sets.
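The constraint-tightening mechanism for unbounded Gaussian noise can be sanity-checked by Monte Carlo. The sketch below (illustrative numbers, not from the paper) tightens one facet at violation level p = 0.05 using the Gaussian quantile and verifies that the empirical violation rate lands near p:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian state belief and one facet direction (illustrative values).
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
alpha = np.array([1.0, -1.0])
p = 0.05
z = 1.6449                     # Phi^{-1}(0.95), standard normal quantile

# Tightened deterministic facet: alpha^T mu + z * ||Sigma^{1/2} alpha|| <= beta
sigma_dir = np.sqrt(alpha @ Sigma @ alpha)   # std of alpha^T x
beta = alpha @ mu + z * sigma_dir

samples = rng.multivariate_normal(mu, Sigma, size=200_000)
violation_rate = np.mean(samples @ alpha > beta)
print(violation_rate)
```

The empirical violation rate concentrates around 0.05, confirming that the quantile-based tightening achieves exactly the prescribed risk level under Gaussian noise rather than an over-conservative bound.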

5. Comparative Advantages and Applications

CSS offers key advantages over classical SMPC and robust approaches:

  • Direct covariance control: Eliminates the need for ad-hoc tube parameters or conservative disturbance-feedback parameterizations.
  • Convexity and computational tractability: A single convex program is solved rather than a sequence of nonconvex or hybrid programs.
  • Guaranteed probabilistic constraint satisfaction: Explicit shaping of the propagated covariance ensures prescribed safety probabilities can be robustly enforced—even as state and control constraints are formulated as chance constraints.
  • Reduced conservatism: Absence of tube conservatism and the ability to shape the terminal covariance (e.g., via LMI tools), as shown in race-car path tracking and other applications, enables tighter reference tracking and minimal lap-time while maintaining high confidence in safety margins (Okamoto et al., 2019).

Applications span from simple unstable linear systems to high-dimensional linearized vehicle models with state- and input-dependent constraints, recursive feasibility guarantees, and Laplace/Kalman output-filter corrections. Empirical results demonstrate reduced runtime and tighter constraint satisfaction compared to competing SMPC variants.

6. Notable Extensions and Open Issues

CSS extends directly to systems with:

  • Additive/multiplicative noise: Convex relaxations and proper lifting ensure CSS remains tractable in the presence of parametric uncertainties (Knaup et al., 2023).
  • Output feedback: Integration with Kalman filtering, where constraints and feedback policy design explicitly depend on estimation error and process/measurement noise structures (Ridderhof et al., 2020).
  • Nonlinear dynamics: Sequential convexification, operator splitting, and stochastic optimization strategies are used where system nonlinearities or contact-rich phenomena are present (Ratheesh et al., 18 Nov 2024, Shirai et al., 2023).
  • Distributional robustness: Cantelli-based risk allocation and moment-based ambiguity sets allow CSS to provide safety guarantees even under non-Gaussian disturbances (Renganathan et al., 2022).

Open issues include optimal risk allocation across constraints (to reduce conservatism), control with strict input hard bounds, and scaling to higher dimensions or more general ambiguity descriptions. Practical extensions include integration with learning-based prediction, model identification, and online receding-horizon control in safety-critical robotics and autonomous systems.


Table: CSS Core Features vs. Traditional SMPC

Feature               | CSS                                  | Classical SMPC
----------------------|--------------------------------------|-------------------------------
Covariance control    | Direct, explicit                     | Indirect, conservative
Constraint type       | Chance constraints (joint/Boole)     | Tube-based or pointwise
Convexity             | Single SDP (polynomial time)         | Often sequential/nonconvex
Computational cost    | O(N^3) in horizon length             | Higher (block-lifting, tubes)
Recursive feasibility | Guaranteed by terminal invariant set | Not always explicit
Applications          | Stochastic, uncertain, contact-rich  | Less suited to high-noise settings

7. Representative Numerical Results

Empirical studies reported in (Okamoto et al., 2019) show:

  • Compliance with tight chance constraints (e.g., \Pr([-2\;\; 1]\,x \leq 2.5) \geq 1 - 10^{-3}) in unstable 2D systems.
  • A 30% per-step runtime reduction versus disturbance-feedback SMPC in the same setting.
  • Race-car tracking under process noise: explicit covariance steering lets the mean trajectory skirt the track boundaries for lap-time minimization, meeting lateral-error chance bounds at the end of the horizon with high probability, enabled by direct \Sigma_f shaping via LMI methods.
  • Absence of trial-and-error tuning for cost matrices: chance constraint satisfaction and trajectory shaping are handled in a principled, convex-optimization fashion.

In summary, Constrained Covariance Steering provides a systematic, convex, and computationally efficient framework for joint mean and covariance control of stochastic linear systems under explicit state and control chance constraints, delivering tractable solutions with high-confidence safety margins in practical scenarios.
