Modified Estimation-Driven Tube-Based MPC
- The topic highlights robust MPC design that adaptively refines tube geometry using real-time estimation to reduce conservatism.
- It jointly optimizes control inputs and tube properties to ensure recursive feasibility and robust constraint satisfaction under uncertainty.
- Applications span linear and nonlinear systems, leveraging techniques such as zonotope arithmetic and deep learning for performance improvement.
Modified estimation-driven tube-based Model Predictive Control (MPC) is a family of robust control schemes that adaptively construct and update invariant tubes around nominal state trajectories by leveraging state, parameter, or disturbance estimates in real time. These frameworks relax the conservatism found in classical tube MPC—wherein tube geometry and tightening are designed offline for worst-case uncertainty—by continuously refining model or disturbance sets using online data. As a result, modified estimation-driven tube-based MPC yields less conservative constraint tightening, improved closed-loop performance, and rigorous guarantees of recursive feasibility and robust stability, even when the true plant falls outside the originally assumed model set or faces time-varying disturbance statistics.
1. Core Principles and Problem Statement
Modified estimation-driven tube-based MPC seeks to maintain robust invariance of real plant trajectories with respect to an adaptive tube in the presence of parametric uncertainty, additive disturbances, and time-varying model error. It does so by:
- Maintaining an outer approximation model or uncertainty set (e.g., polytope, zonotope, or polytopic bundle) for plant parameters and disturbances, denoted $\Theta_t$ for the system matrices and $\mathbb{W}_t$ for the disturbances at step $t$ (Dey et al., 16 Mar 2026, Ghiasi et al., 24 Dec 2025).
- Refining these sets online via set-membership methods or learning-based updates, often using recent state-input-output measurements.
- Jointly optimizing tube geometry (width, cross-section, or generators) and control inputs at each receding horizon step, allowing the tube to "shrink" as model or disturbance uncertainty contracts (Morozov et al., 2020, Lopez et al., 2019).
- Propagating the tube cross-section forward using parameter-dependent, error-dynamics-consistent maps, often using online-computed invariant sets.
- Imposing robustified, state/input tightened constraints to ensure the real plant remains feasible under any admissible realization of the current uncertainty set.
The general plant model is

$$x_{t+1} = A(\theta)\,x_t + B(\theta)\,u_t + w_t, \qquad \theta \in \Theta_t,\ w_t \in \mathbb{W}_t,$$

or, in nonlinear settings, $x_{t+1} = f(x_t, u_t, \theta) + w_t$ with parameter- or state-dependent $f$, leveraging structure as needed (Morozov et al., 2020, Lopez et al., 2019, Surma et al., 2023).
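To make the uncertain plant model concrete, the following minimal sketch simulates a two-state system with a scalar uncertain parameter and a componentwise-bounded disturbance; the matrices, bounds, and parameter range are illustrative assumptions, not taken from any cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical uncertain plant x_{t+1} = A(theta) x_t + B u_t + w_t,
# with theta in [0.4, 0.6] and |w_t|_inf <= 0.05.
def A(theta):
    return np.array([[1.0, 0.1],
                     [0.0, 1.0 - 0.1 * theta]])

B = np.array([[0.0], [0.1]])
W_BOUND = 0.05  # componentwise disturbance bound defining W_t

def step(x, u, theta):
    w = rng.uniform(-W_BOUND, W_BOUND, size=2)  # admissible disturbance draw
    return A(theta) @ x + (B @ u).ravel() + w

x = np.array([1.0, 0.0])
for _ in range(10):
    x = step(x, np.array([-0.5]), theta=0.5)
print(x.shape)  # (2,)
```

Any trajectory generated this way is one admissible realization of the uncertainty set; the tube must contain all of them simultaneously.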
2. Online Set-Membership Estimation and Tube Adaptation
At each step, the uncertainty set update is constructed from the most recent plant data and propagated forward:
- Parameter set update: For each measured tuple $(x_{t-1}, u_{t-1}, x_t)$, the feasible parameters are those that could have generated the transition within the disturbance bound:

$$\Theta_t = \Theta_{t-1} \cap \left\{ \theta : x_t - A(\theta)\,x_{t-1} - B(\theta)\,u_{t-1} \in \mathbb{W} \right\}.$$

This intersection is performed via polytope or zonotope operations (Dey et al., 16 Mar 2026, Ghiasi et al., 24 Dec 2025).
- Disturbance set refinement may also be performed using output residuals (Ghiasi et al., 24 Dec 2025) or by leveraging learned models (e.g., deep quantile regression, fuzzy models) (Fan et al., 2020, Surma et al., 2023).
- Affine or polyhedral tube geometry: Tube sections are parameterized homothetically (Dey et al., 16 Mar 2026, Ghiasi et al., 24 Dec 2025, Parsi et al., 2022), as zonotopes (Ghiasi et al., 24 Dec 2025, Alcala et al., 2020), via polytopic bundles, or as general parameterized set-valued maps (including learned parametric forms) (Fan et al., 2020, Surma et al., 2023).
- Co-design of tube and ancillary gain: The stabilizing ancillary feedback gain is re-computed at each step to minimize the required tightening and ensure contractivity of the tube error dynamics (Dey et al., 16 Mar 2026, Ghiasi et al., 24 Dec 2025).
- Set-membership vs. parametric learning: Set-membership-based approaches provide hard (non-probabilistic) guarantees that the true model lies within the updated uncertainty, while learning-based approaches (e.g., deep quantile tubes (Fan et al., 2020)) typically offer probabilistic safety guarantees.
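The set-membership update above can be sketched with interval sets: each observed transition defines a strip of parameters consistent with the disturbance bound, and the current set is intersected with it. The scalar plant, bounds, and prior below are illustrative assumptions; the hard-guarantee property (the true parameter is never excluded) is the point being demonstrated.

```python
import numpy as np

# Set-membership update for a scalar plant x+ = a*x + u + w, |w| <= W_BAR.
# Each transition yields a data-consistent "strip" of a-values; the feasible
# interval is the running intersection of prior and strips.
W_BAR = 0.05
TRUE_A = 0.8  # unknown to the controller; used only to generate data

def update(interval, x, u, x_next):
    lo, hi = interval
    if abs(x) < 1e-9:
        return interval  # uninformative transition: strip is unbounded
    # From x_next - a*x - u = w and |w| <= W_BAR:
    c_lo = (x_next - u - W_BAR) / x
    c_hi = (x_next - u + W_BAR) / x
    if x < 0:
        c_lo, c_hi = c_hi, c_lo
    return (max(lo, c_lo), min(hi, c_hi))

rng = np.random.default_rng(1)
interval = (0.0, 1.5)  # prior parameter set Theta_0
x = 1.0
for _ in range(20):
    u = rng.uniform(-0.2, 0.2)
    x_next = TRUE_A * x + u + rng.uniform(-W_BAR, W_BAR)
    interval = update(interval, x, u, x_next)
    x = x_next

lo, hi = interval
assert lo <= TRUE_A <= hi  # hard (non-probabilistic) guarantee
print(round(hi - lo, 3))
```

Because intersections only shrink the set, the guarantee holds at every step as long as the true disturbance respects its assumed bound; this is the key difference from probabilistic learning-based tubes.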
3. Optimization Problem and Tube Propagation
The receding horizon optimization incorporates the adaptive tubes via coupled constraints:
- Decision variables: Nominal predicted state/input trajectory, tube geometry (cross-section or width at each stage), ancillary feedback gain (optionally), and terminal ingredients.
- Tube dynamics: The tube cross-section propagation is governed by dynamic constraints; e.g., for homothetic tubes $\mathcal{X}_k = z_k \oplus \alpha_k\,\mathcal{X}^0$ the scaling obeys

$$\alpha_{k+1} \ge \rho\,\alpha_k + \bar{w}_t,$$

where $\rho < 1$ is the contraction rate of the error dynamics under the ancillary gain and $\bar{w}_t$ bounds the current disturbance set, or, in nonlinear/discrete settings,

$$\delta_{k+1} = \lambda\,\delta_k + \bar{d}(x_k, u_k),$$

with $\bar{d}(x_k, u_k)$ governed by state-dependent model error (Morozov et al., 2020, Lopez et al., 2019).
- Tightened constraints:

$$z_k \oplus \alpha_k\,\mathcal{X}^0 \subseteq \mathcal{X}, \qquad v_k \oplus K\left(\alpha_k\,\mathcal{X}^0\right) \subseteq \mathcal{U},$$

or more generally via the Pontryagin difference with the current tube/error set.
- Terminal constraints: Terminal sets and costs are adapted online to ensure recursive feasibility and robust invariance for the latest uncertainty set $\Theta_t$ (Dey et al., 16 Mar 2026, Parsi et al., 2022).
In learning-based variants, the tube width at each prediction step is generated from a regressor (e.g., neural network or fuzzy logic model) trained on the history of plant data, mapping current state-action to the anticipated required tube size (Fan et al., 2020, Surma et al., 2023).
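A scalar homothetic-tube sketch makes the coupling between tube recursion and constraint tightening concrete. The plant, gain, and bounds are hypothetical; the recursion `alpha_{k+1} = rho*alpha_k + wbar` and the induced tightening `|z_k| <= X_MAX - alpha_k` follow the pattern described above.

```python
# Homothetic tube recursion for a scalar closed-loop error e+ = rho*e + w,
# |w| <= W_BAR, with rho = |a + b*k_fb| < 1 (contractive ancillary loop).
# All constants are illustrative.
A_NOM, B_NOM, K_FB = 1.0, 0.5, -0.8
RHO = abs(A_NOM + B_NOM * K_FB)  # contraction rate: 0.6
W_BAR = 0.05                     # current disturbance bound
X_MAX = 1.0                      # state constraint |x| <= X_MAX

alpha = 0.0                      # tube half-width, alpha_0 = 0 (state known exactly)
tightened = []
for k in range(10):
    tightened.append(X_MAX - alpha)  # nominal must satisfy |z_k| <= X_MAX - alpha_k
    alpha = RHO * alpha + W_BAR      # tube dynamics

# alpha converges toward the minimal robust invariant radius W_BAR/(1-RHO) = 0.125
print(round(alpha, 4))
```

A smaller disturbance bound (obtained from online set refinement) lowers the steady-state tube radius `W_BAR/(1-RHO)` directly, which is exactly how estimation reduces tightening.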
4. Theoretical Guarantees
These frameworks establish:
- Recursive feasibility: By construction, the adaptive tube and constraints ensure that if the problem is feasible at time $t$, it remains feasible under the updated sets at all times $t' > t$ (Dey et al., 16 Mar 2026, Ghiasi et al., 24 Dec 2025, Parsi et al., 2022).
- Robust constraint satisfaction: The true state/input remain in the tightened constraint sets for all admissible uncertainty, provided set-valued inclusions and propagation laws are satisfied at each step.
- Robust (exponential) stability: Lyapunov arguments—either via stage and terminal cost decrease (Dey et al., 16 Mar 2026, Morozov et al., 2020), or via polyhedral Lyapunov functions and contractive tubes—yield robust asymptotic or practical stability bounds.
- Less conservatism: Online adaptation of model or disturbance sets strictly reduces required tube size and constraint tightening as data accumulates and improves the uncertainty description (Dey et al., 16 Mar 2026, Ghiasi et al., 24 Dec 2025).
- Probabilistic guarantees (when using probabilistically trained tubes or deep networks): Constraint satisfaction is enforced with a prescribed (high) probability at each stage (Fan et al., 2020).
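The conservatism-reduction claim rests on a simple monotonicity property: intersecting the uncertainty set with each new data-consistent strip can only shrink it, so tube size and tightening are non-increasing over time. A sketch with interval sets (all numbers hypothetical):

```python
# Running intersection of a prior parameter interval with data-consistent
# strips; the width sequence is monotonically non-increasing.
def intersect(a, b):
    return (max(a[0], b[0]), min(a[1], b[1]))

strips = [(0.2, 1.2), (0.5, 1.4), (0.3, 0.9)]  # hypothetical strips from data
theta_set = (0.0, 2.0)                         # prior uncertainty set
widths = []
for s in strips:
    theta_set = intersect(theta_set, s)
    widths.append(theta_set[1] - theta_set[0])

print(widths)  # approximately [1.0, 0.7, 0.4]
```

Since constraint tightening is a monotone function of set size, this monotone refinement translates directly into weaker tightening and larger feasible regions as data accumulates.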
5. Methodological Variants and Applications
Several distinct instantiations of modified estimation-driven tube-based MPC have been proposed:
| Variant | Tube Construction | Uncertainty/Learning |
|---|---|---|
| Adaptive Tube MPC (Dey et al., 16 Mar 2026) | Polytope/homothetic | Set-membership, polytopic |
| Elastic Tube MPC (Ghiasi et al., 24 Dec 2025) | Polyhedron/zonotope | Zonotopic set-membership |
| State-Dependent Dynamic Tube MPC (Surma et al., 2023) | Polytope, fuzzy logic | Fuzzy disturbance model |
| Deep Learning Tubes (Fan et al., 2020) | Learned (NN) | Quantile regression/deep NN |
| Output-Feedback Tube MPC (Dey et al., 6 Feb 2025) | Two-tier tube | Observer+set-membership |
| Self-Tuning Tube MPC (Tranos et al., 2022) | Polytope (confidence) | LS estimator, confidence |
| System-Level Tube MPC (Sieber et al., 2024) | SLP/convex tubes | Affine filter, online update |
Specific applications include nonlinear and linear systems with time-varying uncertainty, robot manipulators, autonomous vehicles, and search-and-rescue robots (Ghiasi et al., 24 Dec 2025, Luo et al., 2021, Alcala et al., 2020, Surma et al., 2023). Quantitative evaluations consistently demonstrate reduced tube size, relaxed constraint tightening, increased feasibility regions, improved closed-loop performance, and—when appropriate—probabilistic safety margins commensurate with the available data and learning rate.
6. Computational Aspects
Modified estimation-driven tube-based MPC exploits online set updates, tube co-design, and tailored representations (e.g., zonotopes, polytopes, neural tubes) for computational tractability:
- Zonotope arithmetic permits linear-time tube propagation (Ghiasi et al., 24 Dec 2025, Alcala et al., 2020).
- Asynchronous or "secondary" tube optimization schemes decouple tube geometry design from nominal trajectory update to meet real-time constraints (Sieber et al., 2024).
- Learning-based methods (e.g., deep quantile regression) amortize training cost offline and yield real-time tube bounds by table lookup or network evaluation (Fan et al., 2020).
- The online adaptation of tube geometry and gain typically results in similar per-step optimization complexity as classic tube-MPC (with modest increases in variables), but enables longer prediction horizons or faster sampling rates due to less conservative constraint tightening (Ghiasi et al., 24 Dec 2025, Alcala et al., 2020, Fan et al., 2020).
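Zonotope propagation is linear-time because both operations needed per step reduce to dense linear algebra on the generator matrix: a linear map multiplies center and generators, and a Minkowski sum concatenates generator columns. A minimal sketch (matrices and bounds are illustrative):

```python
import numpy as np

# A zonotope {c + G b : |b|_inf <= 1} propagated through x+ = A x + w.
def linear_map(A, c, G):
    return A @ c, A @ G                 # map center and generators

def minkowski_sum(c1, G1, c2, G2):
    return c1 + c2, np.hstack([G1, G2])  # concatenate generator columns

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
c, G = np.zeros(2), 0.1 * np.eye(2)      # initial tube cross-section
cw, Gw = np.zeros(2), 0.05 * np.eye(2)   # disturbance zonotope

for _ in range(5):                       # five-step forward propagation
    c, G = linear_map(A, c, G)
    c, G = minkowski_sum(c, G, cw, Gw)

radius = np.sum(np.abs(G), axis=1)       # componentwise interval hull of tube
print(G.shape)  # generator count grows by 2 per step: (2, 12)
```

The generator count grows with each Minkowski sum, so practical implementations periodically apply order reduction to cap the number of columns; that step is omitted here for brevity.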
7. Interpretation and Trade-Offs
The adoption of estimation-driven tube-based MPC leads to notable benefits:
- Systematic reduction of conservatism as uncertainty sets are refined.
- Expanded feasible regions and less frequent infeasibility in challenging scenarios.
- Rigorous guarantees of recursive feasibility and robust constraint satisfaction, including in high-dimensional and nonlinear settings.
- Flexibility to incorporate parametric estimation, machine learning, output feedback, and state-dependent or data-driven disturbance models.
However, the efficacy of online set-membership approaches depends on the informativeness of the plant data, persistence of excitation, and adequacy of the learning or identification mechanism. Some learning-based tube construction methods yield probabilistic, rather than hard, guarantees. Computational complexity remains tractable, but nonconvexity can arise in highly flexible or nonlinear parameterizations, so practical deployment may leverage approximate or asynchronous computation schemes (Sieber et al., 2024).
In conclusion, modified estimation-driven tube-based MPC frameworks enable real-time, performance-oriented robust MPC by adaptively shaping the invariant tubes around nominal trajectories, integrating online estimation and learning directly into the control loop, and maintaining the theoretical guarantees of classical tube-based robust MPC under increasingly realistic operating scenarios (Dey et al., 16 Mar 2026, Ghiasi et al., 24 Dec 2025, Morozov et al., 2020, Parsi et al., 2022).