Stochastic Model Predictive Control
- Stochastic Model Predictive Control is an optimization-based control approach that accounts for the full probability distributions of uncertainties to enable risk-calibrated decision making.
- It employs generalized polynomial chaos for efficient uncertainty propagation and sample-based chance constraints to balance performance and robustness.
- The framework provides analytic gradient evaluations and real-time feasibility, offering a less conservative alternative to traditional robust MPC.
Stochastic Model Predictive Control (SMPC) frameworks provide a principled approach for optimizing the closed-loop behavior of systems subject to dynamic uncertainties and probabilistic constraints. Unlike robust or deterministic MPC—where uncertainty is treated in a worst-case or average sense—SMPC explicitly accounts for the full probability distributions of stochastic disturbances, initial conditions, and model parameters. This enables controller designs that systematically trade off performance, robustness, and risk, through rigorous chance-constraint formulations and tailored uncertainty propagation methods. SMPC frameworks have been developed for both linear and nonlinear systems, with broad applicability to process control, energy management, and safety-critical operation in uncertain environments.
1. Uncertainty Propagation in Nonlinear Stochastic MPC
A central challenge in SMPC for nonlinear systems is the forward propagation of uncertainties through nonlinear dynamics. The framework detailed in (Streif et al., 2014) uses generalized polynomial chaos (PC) expansions to perform efficient and accurate uncertainty propagation in systems with time-invariant stochastic parameters and initial conditions. Specifically, any function $f$ of the random variable vector $\theta$ is expressed as a truncated series in orthogonal polynomials, $f(\theta) \approx \sum_{\alpha \in \Lambda} f_\alpha \Phi_\alpha(\theta)$, where $f_\alpha$ are deterministic coefficients and $\Phi_\alpha$ are orthogonal basis functions indexed by a truncation multi-index set $\Lambda$. This technique "lifts" the original random ODE or DAE into an extended deterministic system for the PC coefficients. Two propagation strategies are distinguished:
- Collocation methods: PC coefficients are computed by least-squares fitting to a cloud of sample trajectories.
- Galerkin projection: When the system dynamics are analytic with respect to the states and separable with respect to parameters, the evolution of the PC coefficients can be computed via projection, yielding significant computational savings.
Key statistical moments (means, variances, higher moments) of the system states and outputs are then recovered efficiently from the PC coefficients, enabling rapid sampling and evaluation of chance constraints.
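To make the recovery of statistics concrete, the following minimal sketch (assuming a scalar state and an orthonormal PC basis with $\Phi_0 = 1$; the function name is illustrative, not taken from the paper) shows how mean and variance follow directly from the PC coefficients:

```python
# Minimal sketch (assumption): mean and variance of a scalar state recovered
# from its PC coefficients, assuming an orthonormal basis with Phi_0 = 1.
import numpy as np

def pc_moments(coeffs):
    """coeffs[k] is the coefficient of the k-th orthonormal basis function."""
    coeffs = np.asarray(coeffs, dtype=float)
    mean = coeffs[0]                     # E[Phi_0] = 1 and E[Phi_k] = 0 for k > 0
    variance = np.sum(coeffs[1:] ** 2)   # orthonormality gives a Parseval-type identity
    return mean, variance

# Example: x(theta) ~ 2.0 + 0.5*Phi_1(theta) + 0.1*Phi_2(theta)
print(pc_moments([2.0, 0.5, 0.1]))       # -> approximately (2.0, 0.26)
```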
2. Formulation and Enforcement of Chance Constraints
SMPC frameworks impose probabilistic constraints directly on the system evolution, rather than requiring hard satisfaction for all possible uncertainty realizations. In (Streif et al., 2014), chance constraints are specified as
$\Pr\big[h(x(t), \theta) \le 0\big] \ge \beta$
for a given satisfaction probability $\beta \in (0,1)$. These are enforced via sample-average approximations using PC surrogate models,
$\frac{1}{N_s}\sum_{i=1}^{N_s} \mathbb{1}\big[h(x^{(i)}(t)) \le 0\big] \ge \tilde\beta,$
where $\mathbb{1}[\cdot]$ is the indicator function, $x^{(i)}$ denotes the surrogate trajectory for the $i$-th uncertainty sample, and $\tilde\beta \ge \beta$ is a statistically 'tightened' threshold correcting for the finite sample size $N_s$. The constraint tightening is computed using the beta-distribution inverse quantile: $\tilde\beta$ is chosen such that the $\epsilon$-quantile of $\mathrm{Beta}\big(\lceil N_s \tilde\beta \rceil,\, N_s - \lceil N_s \tilde\beta \rceil + 1\big)$ is at least $\beta$, where $1-\epsilon$ is a designer-specified confidence level. This ensures that, with high probability (at least $1-\epsilon$), the actual satisfaction level meets or exceeds $\beta$.
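As a minimal numerical illustration (not the authors' implementation; the function names and the Clopper-Pearson-style construction of the Beta quantile are assumptions), the tightening and the sample-average check could be coded as:

```python
# Hedged sketch: compute a tightened sample threshold beta_tilde for a chance
# constraint Pr[h <= 0] >= beta, using a Beta-quantile lower confidence bound
# on a binomial proportion, then check the sample-average constraint.
import numpy as np
from scipy.stats import beta as beta_dist

def tightened_threshold(beta_target, n_samples, eps):
    """Smallest fraction beta_tilde such that, if at least
    beta_tilde * n_samples of the surrogate samples satisfy the constraint,
    the true satisfaction probability exceeds beta_target with confidence 1 - eps."""
    for k in range(int(np.ceil(beta_target * n_samples)), n_samples + 1):
        # eps-quantile of Beta(k, n - k + 1): lower confidence bound on the
        # true satisfaction probability given k "successes" out of n samples.
        lower_bound = beta_dist.ppf(eps, k, n_samples - k + 1)
        if lower_bound >= beta_target:
            return k / n_samples
    return 1.0  # no feasible tightening with this sample size

def empirical_chance_constraint(h_values, beta_tilde):
    """Sample-average approximation: fraction of samples with h <= 0."""
    return np.mean(h_values <= 0.0) >= beta_tilde

# Illustrative use: require Pr[h <= 0] >= 0.95 with 99% confidence, 1000 samples.
beta_tilde = tightened_threshold(beta_target=0.95, n_samples=1000, eps=0.01)
h_samples = np.random.normal(loc=-2.0, scale=1.0, size=1000)  # toy surrogate h values
print(beta_tilde, empirical_chance_constraint(h_samples, beta_tilde))
```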
3. Computational Methods and Optimization Efficiency
Efficient solution of the resulting SMPC optimization problem requires both rapid uncertainty propagation and efficient evaluation of objective and constraint gradients. The PC expansion enables "pseudo Monte Carlo" sampling via matrix-vector multiplication, which is orders of magnitude faster than brute-force Monte Carlo. Expectation-based objective functions (including higher moments, e.g., for risk-aware design) are computed with negligible cost.
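A hedged sketch of the "pseudo Monte Carlo" idea follows (the Hermite basis, dimensions, and coefficient values are illustrative assumptions): once the basis functions are evaluated at a batch of parameter samples, surrogate outputs are obtained from a single matrix product rather than repeated ODE integrations.

```python
# Hedged sketch: "pseudo Monte Carlo" sampling from a PC surrogate. Once the
# PC coefficients are available, evaluating the surrogate at a large batch of
# parameter samples is a single matrix product (no further ODE integrations).
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

n_samples, degree = 10_000, 3
theta = np.random.standard_normal(n_samples)          # sampled Gaussian uncertainty
Phi = hermevander(theta, degree)                       # basis matrix, shape (n_samples, degree + 1)
Phi = Phi / np.sqrt([math.factorial(k) for k in range(degree + 1)])  # orthonormalize He_k
coeffs = np.array([[2.0, 0.5, 0.1, 0.0],               # illustrative PC coefficients for
                   [1.0, -0.2, 0.0, 0.05]]).T          # two outputs, shape (degree + 1, 2)
surrogate_samples = Phi @ coeffs                        # shape (n_samples, 2)
print(surrogate_samples.mean(axis=0), surrogate_samples.var(axis=0))
```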
A salient feature of (Streif et al., 2014) is the derivation of an analytic, sample-based gradient of the chance constraints with respect to the decision variables (Proposition 1). For multi-dimensional uncertainty, the gradient is computed by isolating the roots of the constraint function $h(x(t;\theta), u) = 0$ with respect to a chosen random variable (e.g., $\theta_1$), then leveraging the implicit function theorem to differentiate the root $\theta_1^r$ with respect to the decision variables. This avoids the inaccuracies and computational cost of finite-difference approximations, which is particularly important given the discrete nature of sample-based evaluations.
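As a sketch of the underlying calculation (the notation is illustrative and assumes, for simplicity, that $h$ is monotonically increasing in the chosen variable $\theta_1$ and that $\theta_1$ is independent of the remaining uncertainties $\theta_{2:n}$; it is not reproduced verbatim from Proposition 1), differentiating under the expectation and applying the implicit function theorem gives

$$
\Pr[h \le 0] = \mathbb{E}_{\theta_{2:n}}\!\left[\int_{-\infty}^{\theta_1^r(u,\,\theta_{2:n})} p_{\theta_1}(s)\,\mathrm{d}s\right],
\qquad
\frac{\partial}{\partial u}\Pr[h \le 0] = \mathbb{E}_{\theta_{2:n}}\!\left[p_{\theta_1}\big(\theta_1^r\big)\,\frac{\partial \theta_1^r}{\partial u}\right],
$$
$$
\frac{\partial \theta_1^r}{\partial u} = -\left(\frac{\partial h}{\partial \theta_1}\right)^{-1}\frac{\partial h}{\partial u}\bigg|_{\theta_1 = \theta_1^r}.
$$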
4. Real-Time Implementation and Case Study Evaluation
The framework's practical applicability is demonstrated on the Williams–Otto semi-batch reactor, a benchmark nonlinear process system with seven states and ten uncertain parameters/initial conditions. The stochastic nonlinear MPC (SNMPC) formulation predicts the propagation of parametric and initial-condition uncertainty through nonlinear chemical reaction and dilution dynamics.
The controller runs in a receding/shrinking-horizon manner and enforces chance constraints (e.g., on side-product concentration and reactor volume) at a prescribed confidence level. Simulation results indicate that whereas a nominal NMPC (which optimizes for mean values only) leads to constraint violations in a significant fraction of disturbance realizations, the SNMPC maintains high-probability constraint satisfaction by probabilistically 'tightening' the constraints according to the propagated uncertainty.
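To show the structure of such a loop, here is a hedged sketch (the helper callables solve_snmpc, propagate_pc, and plant_step are hypothetical placeholders, not part of the paper's code):

```python
# Hedged sketch of a shrinking-horizon stochastic MPC loop (structure only;
# solve_snmpc, propagate_pc, and plant_step are hypothetical placeholders).
import numpy as np

def run_shrinking_horizon(x0, n_steps, solve_snmpc, propagate_pc, plant_step):
    x = x0
    trajectory, inputs = [x0], []
    for k in range(n_steps):
        horizon = n_steps - k                      # horizon shrinks toward the batch end
        pc_coeffs = propagate_pc(x, horizon)       # propagate uncertainty from the current state
        u_plan = solve_snmpc(pc_coeffs, horizon)   # chance-constrained OCP over the remaining horizon
        x = plant_step(x, u_plan[0])               # apply only the first input to the plant
        trajectory.append(x)
        inputs.append(u_plan[0])
    return np.array(trajectory), np.array(inputs)
```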
The framework demonstrates real-time feasibility:
- Time breakdowns for forward propagation, gradient evaluation, and ODE integration are reported, confirming suitability for online control.
- The approach is shown to be less conservative and more systematic than robust NMPC, which must guarantee constraint satisfaction under extreme scenarios and hence may degrade performance.
5. Mathematical Framework Overview
The optimization problem is formally written as:
- System dynamics: $\dot{x}(t) = f\big(x(t), u(t), \theta\big)$, $x(0) = x_0(\theta)$
- Uncertainties: $\theta$, random variables with given PDFs $p_\theta$
- Cost functional: written in terms of the expected value and higher (central) moments of the state (running and terminal costs)
- Chance constraints: $\Pr\big[h_j(x(t), u(t)) \le 0\big] \ge \beta_j$
- Sample-approximate constraint: $\frac{1}{N_s}\sum_{i=1}^{N_s} \mathbb{1}\big[h_j(x^{(i)}(t), u(t)) \le 0\big] \ge \tilde\beta_j$
- PC expansion: $x(t, \theta) \approx \sum_{\alpha \in \Lambda} x_\alpha(t)\, \Phi_\alpha(\theta)$, with ODEs for the coefficients $x_\alpha(t)$ obtained via Galerkin projection.
The framework allows each term (cost, constraints, gradients) to be computed analytically or via efficient sampling of the PC surrogate, underpinning the computational tractability of the entire method.
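To illustrate how these pieces fit together, the following toy sketch (all names, dynamics, and numbers are illustrative assumptions; the PC surrogate and NLP solver are replaced by a closed-form state map and a coarse feasibility search) solves a small sample-approximated chance-constrained control problem:

```python
# Hedged end-to-end toy sketch (not the Williams-Otto case study): minimize
# control effort subject to a sample-approximated chance constraint on a
# terminal state, using a fixed batch of uncertainty samples.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(1.0, 0.2, size=2000)     # samples of an uncertain gain
beta_tilde = 0.97                           # tightened threshold (see Section 2)
x_max = 0.6                                 # terminal-state bound

def terminal_state(u, theta):
    # Toy closed-form "dynamics": a stronger input u pushes the state down.
    return theta * np.exp(-u)

def chance_constraint_ok(u):
    frac = np.mean(terminal_state(u, theta) <= x_max)
    return frac >= beta_tilde

# Coarse search over the admissible input; a real SNMPC solver would instead
# use a gradient-based NLP with the analytic chance-constraint gradients.
candidates = np.linspace(0.0, 3.0, 301)
feasible = [u for u in candidates if chance_constraint_ok(u)]
u_opt = min(feasible, key=lambda u: u**2)   # cheapest input satisfying the constraint
print(u_opt)
```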
6. Position Relative to Standard and Robust MPC Approaches
The SNMPC formulation contrasts with:
- Robust MPC, which enforces constraints for all admissible uncertainties (often yielding overly conservative control for wide uncertainty ranges).
- Scenario-based SMPC, which requires large numbers of discrete samples and typically assumes linearity or forces a convex reformulation.
By utilizing the full continuous probability distributions of uncertain parameters and initial conditions via the PC framework, together with statistically validated sample-based chance-constraint handling, the proposed SNMPC achieves a less conservative, more risk-calibrated control action (i.e., constraints are enforced with high but not absolute probability). Further, the efficient analytic and sample-based computational methods allow extension to high-dimensional nonlinear systems in real time.
7. Theoretical Guarantees and Conditions
The framework's statistical confidence in constraint satisfaction is underpinned by precise theorems (Theorems 1 and 2): if the corrected satisfaction probability $\tilde\beta$ is chosen according to the explicit inverse beta-distribution quantile and a prescribed number of samples is used, the chance constraint is guaranteed to hold with the designer-specified confidence level $1-\epsilon$. The sufficient condition that the system dynamics be analytic and separable ensures applicability of the PC expansion and related methods.
These results demonstrate that the stochastic nonlinear MPC framework based on generalized polynomial chaos expansions, statistically calibrated chance constraints, and analytic gradient computation constitutes a robust, less conservative, and computationally feasible methodology for controlling nonlinear uncertain processes in real time (Streif et al., 2014).