Shrinking Disturbance-Invariant Tubes in MPC
- The paper introduces shrinking disturbance-invariant tubes that adapt dynamically to reduce conservatism in robust MPC under bounded uncertainty.
- It leverages zonotopic, polyhedral, and system-level formulations combined with learning-based updates to optimize tube contraction.
- The methodology guarantees recursive feasibility and exponential stability by ensuring error bounds contract over data epochs.
Shrinking disturbance-invariant tubes are set-valued constructs used in robust and learning-based model predictive control (MPC) frameworks to ensure constraint satisfaction and robust positive invariance in the presence of bounded model uncertainty and external disturbances. These tubes evolve dynamically, with their cross-sections shrinking over time or data epochs as model uncertainty is reduced or as the control policy becomes more precise. Central to this development are methodologies leveraging zonotopic, polytopic, and system-level disturbance reachable sets, often integrating learning-based or adaptive updating schemes to systematically reduce conservatism. This article details principal formulations, invariance and contraction conditions, learning-based tube tightening, system-level tubes, and guarantees for recursive feasibility and stability. Key references include “Safe Navigation with Zonotopic Tubes: An Elastic Tube-based MPC Framework” (Ghiasi et al., 24 Dec 2025), “Learning-Based Shrinking Disturbance-Invariant Tubes for State- and Input-Dependent Uncertainty” (Ramadan et al., 16 Jan 2026), and “System Level Disturbance Reachable Sets and their Application to Tube-based MPC” (Sieber et al., 2021).
1. Disturbance-Invariant Tubes: Formalism and Definition
Consider a discrete-time linear system subject to additive disturbances:
$$x_{k+1} = A x_k + B u_k + w_k,$$
where $x_k \in \mathbb{R}^n$, $u_k \in \mathbb{R}^m$, and $w_k \in \mathcal{W}$, with $\mathcal{W} \subset \mathbb{R}^n$ the disturbance set. A tube is a time-indexed sequence of sets $\{\mathcal{E}_k\}_{k \ge 0}$ such that the deviation $e_k = x_k - z_k$ (actual versus nominal state) satisfies $e_k \in \mathcal{E}_k$ at each time $k$, and the true state/input trajectory respects hard constraints:
$$z_k \oplus \mathcal{E}_k \subseteq \mathcal{X}, \qquad v_k \oplus K\mathcal{E}_k \subseteq \mathcal{U},$$
where $z_k$ and $v_k$ denote the nominal state and input, $K$ is a (possibly adaptive) ancillary feedback gain, and $\oplus$ denotes the Minkowski sum.
A tube is disturbance-invariant if, for all admissible $w_k \in \mathcal{W}$ and $e_k \in \mathcal{E}_k$, the error propagation under the closed-loop dynamics remains within subsequent tubes:
$$(A + BK)\mathcal{E}_k \oplus \mathcal{W} \subseteq \mathcal{E}_{k+1}.$$
Shrinking tubes additionally ensure $\mathcal{E}_{k+1} \subseteq \lambda \mathcal{E}_k$ for some contraction factor $\lambda \in (0,1)$, so that tube volume or width decays over time.
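For intuition, the error-set recursion above can be sketched with axis-aligned boxes, a deliberately crude outer approximation; the closed-loop matrix and disturbance bounds below are illustrative and not drawn from the cited papers:

```python
import numpy as np

# Illustrative closed-loop matrix A + B K (assumed Schur stable) and a box
# disturbance |w_i| <= w_max[i]; both are made-up numbers for the sketch.
Acl = np.array([[0.5, 0.1],
                [0.0, 0.4]])
w_max = np.array([0.05, 0.05])

def propagate_box(r):
    """One step of the error recursion e+ = Acl e + w on box radii:
    r+_i = sum_j |Acl[i,j]| r_j + w_max[i] (exact for axis-aligned boxes)."""
    return np.abs(Acl) @ r + w_max

# Iterate from a large initial tube; radii shrink monotonically toward the
# minimal robust positively invariant box r* = (I - |Acl|)^{-1} w_max.
r = np.array([1.0, 1.0])
for _ in range(50):
    r = propagate_box(r)

r_star = np.linalg.solve(np.eye(2) - np.abs(Acl), w_max)
print(r, r_star)
```

Starting from an over-sized box, the iterates decrease and converge to the minimal invariant box, the smallest cross-section a shrinking tube can reach for fixed $\mathcal{W}$.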
2. Zonotopic and Polyhedral Tube Construction
One approach for constructing shrinking disturbance-invariant tubes leverages zonotopic or polyhedral representations. Zonotopes permit computationally efficient set operations and are broadly used for over-approximating disturbances and reachable sets (Ghiasi et al., 24 Dec 2025).
Given a disturbance zonotope
$$\mathcal{W} = \{\, c_w + G_w \xi : \|\xi\|_\infty \le 1 \,\},$$
the error tube at time $k$ is represented as
$$\mathcal{E}_k = \{\, e : H e \le \alpha_k \,\},$$
with fixed facet matrix $H$ and time-varying scale vector $\alpha_k$.
A set $\mathcal{E}$ is $\lambda$-contractive if
$$(A + BK)\mathcal{E} \oplus \mathcal{W} \subseteq \lambda \mathcal{E}$$
for some $\lambda \in [0,1)$. Verification is performed facet-wise: for each facet normal $h_i$ of $H$, the support of $(A+BK)\mathcal{E} \oplus \mathcal{W}$ in direction $h_i$ must not exceed $\lambda \alpha_i$. Adaptive tube contraction and gain tuning are achieved via a linear program that jointly optimizes $\alpha$, $K$, and $\lambda$, ensuring both invariance and minimal tube width (Ghiasi et al., 24 Dec 2025).
3. Learning-Based Tube Tightening for State/Input-Dependent Uncertainty
Real-world applications often encounter disturbances whose bounds depend on the current state and input. This regime invalidates classical invariant set iterations due to circular dependency: verifying invariance for a given tube requires evaluating all disturbances supported on the tube itself. A learning-based solution is provided in (Ramadan et al., 16 Jan 2026), where Gaussian Process (GP) regression models the disturbance mean and variance conditional on .
The GP-generated disturbance credible region, an ellipsoid, is outer-approximated by a polytope:
$$\mathcal{W}(x, u) \subseteq \{\, w : H_w w \le h(x, u) \,\},$$
with the support-function offsets $h(x,u)$ explicitly derived from the GP posterior. As more data are gathered, the posterior variance decreases and the polytopic wrapper tightens.
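A minimal sketch of this outer approximation, assuming the credible region is the standard ellipsoid $\{w : (w-\mu)^\top \Sigma^{-1} (w-\mu) \le \beta\}$ with posterior mean $\mu$ and covariance $\Sigma$ (the numbers below are illustrative, not GP output):

```python
import numpy as np

def ellipsoid_support(h, mu, Sigma, beta):
    """Support function of {w : (w-mu)^T Sigma^{-1} (w-mu) <= beta} in
    direction h: h^T mu + sqrt(beta * h^T Sigma h)."""
    return h @ mu + np.sqrt(beta * h @ Sigma @ h)

def polytope_wrapper(H, mu, Sigma, beta):
    """Outer-approximate the credible ellipsoid by {w : H w <= b}, facet-wise."""
    return np.array([ellipsoid_support(h, mu, Sigma, beta) for h in H])

# Facet normals of a box wrapper; mu, Sigma stand in for a GP posterior
# mean/covariance at some fixed (x, u).
H = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
mu = np.zeros(2)
beta = 9.0  # e.g. a ~3-sigma credible level

b_early = polytope_wrapper(H, mu, 0.010 * np.eye(2), beta)  # few data points
b_late = polytope_wrapper(H, mu, 0.002 * np.eye(2), beta)   # after more data
print(b_early, b_late)
```

Shrinking the posterior covariance shrinks every facet offset, which is exactly the "wrapper tightens" behavior described above.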
A lifted, isotone fixed-point operator is defined in a graph space of triples pairing tube cross-sections with the disturbance sets they support, with an update map that, schematically, propagates the error set one step under the current disturbance polytope:
$$\mathcal{E} \mapsto (A + BK)\mathcal{E} \oplus \mathcal{W}(\mathcal{E}).$$
Iterative application produces a decreasing sequence converging to a disturbance-invariant fixed point, whose projection yields the RPI tube cross-section. This two-time-scale architecture separates outer learning epochs (where polytopes are frozen and tightened) from inner, contraction-based tube refinement.
As epochs advance and data accumulate, the resulting tubes nest monotonically and shrink, formalizing the "shrinking tube" property under state/input-dependent uncertainty (Ramadan et al., 16 Jan 2026).
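The circular dependency and its fixed-point resolution can be sketched in one scalar dimension: the disturbance bound grows with the tube radius itself, yet iterating the isotone map from a conservative start still converges monotonically. All constants are illustrative:

```python
# Scalar sketch of the isotone fixed-point iteration: the disturbance bound
# depends on the tube radius, w(r) = w0 + c*r, which is the circular
# dependency the lifted operator resolves.
rho = 0.6         # closed-loop contraction rate (illustrative)
w0, c = 0.1, 0.1  # radius-independent and radius-dependent disturbance parts

def update(r):
    """One application of the isotone map r -> rho*r + w(r)."""
    return rho * r + w0 + c * r

# Start from a conservative (large) radius; iterates decrease monotonically to
# the fixed point r* = w0 / (1 - rho - c), the shrunken invariant tube radius.
radii = [5.0]
for _ in range(100):
    radii.append(update(radii[-1]))

r_star = w0 / (1 - rho - c)
print(radii[-1], r_star)
```

The monotone decrease of the iterates mirrors the nested, shrinking tubes produced across learning epochs; convergence requires $\rho + c < 1$, i.e. the closed loop contracts faster than the uncertainty grows with the tube.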
4. System Level Disturbance Reachable Sets (SL-DRS) and FIR-Based Shrinking Tubes
System Level Disturbance Reachable Sets (SL-DRS) (Sieber et al., 2021) dispense with the need for recursive set invariance conditions by parametrizing all affine, time-varying stabilizing controllers over a finite time horizon through the system level parameterization (SLP). Here, the error dynamics under closed-loop feedback are represented by the system response matrices $\{\Phi^x_k, \Phi^u_k\}$, with the disturbance impact at horizon step $k$ given by
$$\mathcal{F}_k = \bigoplus_{j=1}^{k} \Phi^x_j \mathcal{W},$$
yielding a sequence of cross-sections $\{\mathcal{F}_k\}_{k=1}^{N}$.
Imposing a finite impulse response (FIR) constraint ($\Phi^x_j = 0$ for $j > T$) causes the disturbance impact to vanish beyond step $T$, making the tube constant in the tail while still permitting shrinking cross-section growth up to the terminal time. If the blocks $\Phi^x_j$ are contractive, the incremental difference sets $\Phi^x_j \mathcal{W}$ also contract, guaranteeing that the per-step growth of the cross-sections decreases towards the horizon.
Both online (concurrent with nominal trajectory optimization) and offline (precomputed) SL-DRS-based tube-MPC formulations are available. Offline computation proceeds by selecting to minimize worst-case tightening, then storing cross-sections for real-time use.
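For a box disturbance, the SL-DRS cross-sections can be sketched facet-wise with interval arithmetic; the FIR blocks below are illustrative placeholders, not the optimized responses of the cited paper:

```python
import numpy as np

# Illustrative FIR system-response blocks Phi_x[j] (assumed contractive) and a
# box disturbance |w_i| <= w_max[i]; made-up numbers for the sketch.
T = 3  # FIR horizon: Phi_x[j] = 0 for j > T
Phi_x = {1: np.eye(2),
         2: np.array([[0.5, 0.1], [0.0, 0.4]]),
         3: np.array([[0.25, 0.09], [0.0, 0.16]])}
w_max = np.array([0.05, 0.05])

def sl_drs_radii(N):
    """Box radii of the SL-DRS cross-sections F_k = (+)_{j=1..min(k,T)} Phi_x[j] W,
    computed with interval arithmetic: r_k = sum_j |Phi_x[j]| @ w_max."""
    radii = []
    for k in range(1, N + 1):
        r = sum(np.abs(Phi_x[j]) @ w_max for j in range(1, min(k, T) + 1))
        radii.append(r)
    return radii

radii = sl_drs_radii(6)
print(radii)  # per-step growth shrinks, and radii are constant beyond step T
```

The contractive blocks make each increment $\Phi^x_j \mathcal{W}$ smaller than the last, and the FIR constraint freezes the cross-section after step $T$, matching the "constant tail" behavior described above.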
5. Recursive Feasibility, Contractivity, and Stability Guarantees
Shrinking disturbance-invariant tubes fundamentally address recursive feasibility and exponential stability in MPC. The adaptive or learning-driven contraction ensures that, provided feasibility at the initial time, the subsequent tube tightening at each time step is less conservative, thus enlarging the feasible region for the nominal MPC. Stability is typically certified via a Lyapunov candidate adapted to the tube structure. For example, in (Ghiasi et al., 24 Dec 2025), a polyhedral Lyapunov function of gauge type,
$$V(e) = \max_i \frac{[H e]_i}{\alpha_i},$$
shows strict geometric decay under contraction and bounded convergence under disturbance.
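A minimal sketch of this gauge-type candidate, evaluated along a disturbance-free closed-loop rollout (matrices illustrative, reusing the box-tube representation $\{e : He \le \alpha\}$ from above):

```python
import numpy as np

# Gauge-type polyhedral Lyapunov function V(e) = max_i [H e]_i / alpha_i for
# the tube {e : H e <= alpha}; all matrices below are illustrative.
H = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
alpha = np.array([0.2, 0.2, 0.2, 0.2])
Acl = np.array([[0.5, 0.1], [0.0, 0.4]])

def V(e):
    """Polyhedral Lyapunov candidate: V(e) <= 1 iff e lies in the tube."""
    return np.max(H @ e / alpha)

# Nominal (disturbance-free) decay: V contracts geometrically along e+ = Acl e.
e = np.array([0.2, 0.2])
values = [V(e)]
for _ in range(5):
    e = Acl @ e
    values.append(V(e))
print(values)
```

With disturbances present, $V$ instead decays into a neighborhood of the tube boundary, which is the "bounded convergence" half of the guarantee.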
In the learning-based setting, tube invariance is certified at the desired coverage level (e.g., with probability at least $1 - \delta$), and the monotonic tube nesting ensures that hard constraints remain satisfied even as the tubes shrink across epochs (Ramadan et al., 16 Jan 2026).
6. Comparative Summary and Implementation Considerations
The following table summarizes salient features across principal approaches:
| Approach | Tube Adaptation | Disturbance Model |
|---|---|---|
| Zonotopic LP co-design (Ghiasi et al., 24 Dec 2025) | Adaptive facet scaling, $\lambda$-contractive | Zonotope, data- and constraint-driven |
| GP-based lifting (Ramadan et al., 16 Jan 2026) | Learning-driven, monotonic, two-time-scale shrinking | Polytopic, state- and input-dependent |
| SL-DRS + FIR (Sieber et al., 2021) | Contractive finite-horizon (shrinking to tail) | General convex set |
Implementations must balance computational tractability (facet count, set operations), online/offline decomposition, and guarantees for real-time responsiveness. Computing or updating supporting hyperplanes, managing GP complexity, and handling high-dimensional tubes are central concerns. A plausible implication is that further scaling of learning-based tubes to nonlinear plants or colored disturbances will require additional theoretical and algorithmic advances.
7. Applications and Research Directions
Shrinking disturbance-invariant tubes have advanced the design of robust, adaptive MPC for systems where model and disturbance uncertainty decrease over time, including autonomous navigation, aerospace control, and robotics (Ghiasi et al., 24 Dec 2025, Ramadan et al., 16 Jan 2026, Sieber et al., 2021). Their structure supports the systematic exploitation of new data, prior physical knowledge, and contractive closed-loop dynamics to minimize conservatism and realize provable safety in safety-critical applications.
Future research is expected to focus on efficient GP updating, refinement of state/input-dependent uncertainty wrappers, extension to nonlinear and hybrid systems, and the integration of system level and learning-based paradigms. Rigorous scalability and real-time performance remain open technical challenges.