
Shrinking Disturbance-Invariant Tubes in MPC

Updated 23 January 2026
  • The paper introduces shrinking disturbance-invariant tubes that adapt dynamically to reduce conservatism in robust MPC under bounded uncertainty.
  • It leverages zonotopic, polyhedral, and system-level formulations combined with learning-based updates to optimize tube contraction.
  • The methodology guarantees recursive feasibility and exponential stability by ensuring error bounds contract over data epochs.

Shrinking disturbance-invariant tubes are set-valued constructs used in robust and learning-based model predictive control (MPC) frameworks to ensure constraint satisfaction and robust positive invariance in the presence of bounded model uncertainty and external disturbances. These tubes evolve dynamically, with their cross-sections shrinking over time or data epochs as model uncertainty is reduced or as the control policy becomes more precise. Central to this development are methodologies leveraging zonotopic, polytopic, and system-level disturbance reachable sets, often integrating learning-based or adaptive updating schemes to systematically reduce conservatism. This article details principal formulations, invariance and contraction conditions, learning-based tube tightening, system-level tubes, and guarantees for recursive feasibility and stability. Key references include “Safe Navigation with Zonotopic Tubes: An Elastic Tube-based MPC Framework” (Ghiasi et al., 24 Dec 2025), “Learning-Based Shrinking Disturbance-Invariant Tubes for State- and Input-Dependent Uncertainty” (Ramadan et al., 16 Jan 2026), and “System Level Disturbance Reachable Sets and their Application to Tube-based MPC” (Sieber et al., 2021).

1. Disturbance-Invariant Tubes: Formalism and Definition

Consider a discrete-time linear system subject to additive disturbances: $x^+ = A x + B u + w$, $w \in \mathcal{W}$, where $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$, and $\mathcal{W}$ is the disturbance set. A tube is a time-indexed sequence of sets $\{Z_k\}$ such that the deviation $e = x - \bar{x}$ (actual versus nominal state) satisfies $e \in Z_k$ at each time $k$, and the true state/input trajectory respects hard constraints: $\bar{x}_k \oplus Z_k \subseteq \mathcal{X}$, $\bar{u}_k \oplus K_k Z_k \subseteq \mathcal{U}$, where $K_k$ is a (possibly adaptive) ancillary feedback gain and $\oplus$ denotes the Minkowski sum.
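For axis-aligned box sets, the tightening condition $\bar{x}_k \oplus Z_k \subseteq \mathcal{X}$ reduces to shrinking each state bound by the tube radius. A minimal sketch (all numbers hypothetical):

```python
# Constraint tightening with box-shaped sets: a minimal sketch.
# State constraint X = [x_lo, x_hi]^n (componentwise), tube cross-section
# Z_k = [-r_k, r_k]^n.  Then xbar_k (+) Z_k subseteq X is equivalent to keeping
# the nominal state inside the tightened box [x_lo + r_k, x_hi - r_k].

def tightened_bounds(x_lo, x_hi, r):
    """Componentwise tightening of box constraints by tube radius r."""
    lo = [l + r for l in x_lo]
    hi = [h - r for h in x_hi]
    return lo, hi

def nominal_feasible(xbar, x_lo, x_hi, r):
    lo, hi = tightened_bounds(x_lo, x_hi, r)
    return all(l <= xi <= h for xi, l, h in zip(xbar, lo, hi))

# 2-D example: X = [-5, 5]^2, tube radius 1.0
print(nominal_feasible([4.5, 0.0], [-5, -5], [5, 5], 1.0))  # False: too close to the boundary
print(nominal_feasible([3.5, 0.0], [-5, -5], [5, 5], 1.0))  # True
```

Because the true state satisfies $x_k \in \bar{x}_k \oplus Z_k$, feasibility of the tightened nominal problem implies the original constraints hold for every admissible disturbance realization.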

A tube $Z_k$ is disturbance-invariant if, for all admissible $e \in Z_k$ and $w \in \mathcal{W}$, the error propagation under the closed-loop dynamics remains within the subsequent tube: $A_{\rm cl}(k) e + w \in Z_{k+1}$, with $A_{\rm cl}(k) = A + B K_k$. Shrinking tubes additionally ensure $Z_{k+1} \subseteq \lambda Z_k$ for some contraction factor $0 < \lambda < 1$, so tube volume or width decays over time.
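In the scalar case these conditions are fully explicit, which makes the shrinking behavior easy to inspect. A sketch with hypothetical numbers:

```python
# Scalar sketch of a shrinking disturbance-invariant tube (hypothetical numbers).
# Error dynamics: e+ = a_cl * e + w with |a_cl| < 1 and |w| <= w_bar.
# Intervals Z_k = [-r_k, r_k] satisfy a_cl * e + w in Z_{k+1} iff
# a_cl * r_k + w_bar <= r_{k+1}; taking equality gives the tightest update,
# which contracts toward the minimal RPI radius r* = w_bar / (1 - a_cl).

def tube_radii(a_cl, w_bar, r0, steps):
    radii = [r0]
    for _ in range(steps):
        radii.append(a_cl * radii[-1] + w_bar)
    return radii

a_cl, w_bar = 0.5, 0.2
radii = tube_radii(a_cl, w_bar, r0=2.0, steps=20)
r_star = w_bar / (1 - a_cl)            # minimal RPI radius, here 0.4

# Starting from a conservative r0 > r*, the radii shrink monotonically:
assert all(r2 <= r1 for r1, r2 in zip(radii, radii[1:]))
print(round(radii[-1], 6), r_star)
```

The fixed point $r^* = \bar{w}/(1 - a_{\rm cl})$ is the scalar analogue of the minimal robust positive invariant set; any initial radius above it yields a shrinking, invariant tube.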

2. Zonotopic and Polyhedral Tube Construction

One approach for constructing shrinking disturbance-invariant tubes leverages zonotopic or polyhedral representations. Zonotopes permit computationally efficient set operations and are broadly used for over-approximating disturbances and reachable sets (Ghiasi et al., 24 Dec 2025).

Given a disturbance zonotope

$$W = \mathcal{Z}_w = \langle G_h, c_h \rangle = \{ c_h + G_h \zeta : \|\zeta\|_\infty \leq 1 \} \subset \mathbb{R}^n,$$

the error tube at time $k$ is represented as

$$Z_k = \mathcal{E}(k) = \{ e \in \mathbb{R}^n : H_e e \leq h_e(k) \},$$

with fixed facet matrix $H_e$ and time-varying scale vector $h_e(k)$.

A set $\mathcal{E}(k)$ is $\lambda$-contractive if

$$A_{\rm cl} \mathcal{E}(k) \oplus W \subseteq \lambda \mathcal{E}(k)$$

for some $0 < \lambda < 1$. Verification is performed facet-wise:

$$\max_{e \in Z_k,\, w \in W} H_e^{j,:} (A_{\rm cl} e + w) \leq \lambda h_e^j(k), \quad j = 1, \ldots, q.$$

Adaptive tube contraction and gain tuning are achieved via a linear program that jointly optimizes $K(k)$, $\lambda(k)$, and $h_e(k+1)$, ensuring both invariance and minimal tube width (Ghiasi et al., 24 Dec 2025).
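The facet-wise maximization splits into support functions of the two sets, which are closed-form for boxes and zonotopes. A 2-D sketch (matrices and numbers hypothetical):

```python
# Facet-wise lambda-contractivity check, sketched in 2-D with pure Python.
# E is a box of radius r with facets H_j, h_j = r; W is a zonotope <G, c>.
# Each per-facet condition  max_{e in E, w in W} H_j (A_cl e + w) <= lam * h_j
# splits into support functions:  h_E(A_cl^T H_j) + h_W(H_j) <= lam * h_j.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mat_T_vec(A, v):            # A^T v for a 2x2 matrix stored row-wise
    return [A[0][0] * v[0] + A[1][0] * v[1],
            A[0][1] * v[0] + A[1][1] * v[1]]

def support_box(a, r):          # support function of the box [-r, r]^2
    return r * sum(abs(ai) for ai in a)

def support_zonotope(a, c, G):  # support function of {c + G z : ||z||_inf <= 1}
    return dot(a, c) + sum(abs(dot(a, g)) for g in G)

def is_lambda_contractive(A_cl, r, c, G, lam):
    H = [[1, 0], [-1, 0], [0, 1], [0, -1]]   # facets of the box
    return all(
        support_box(mat_T_vec(A_cl, Hj), r) + support_zonotope(Hj, c, G) <= lam * r
        for Hj in H
    )

A_cl = [[0.5, 0.1], [0.0, 0.5]]                 # hypothetical closed-loop matrix
c, G = [0.0, 0.0], [[0.05, 0.0], [0.0, 0.05]]   # small disturbance zonotope
print(is_lambda_contractive(A_cl, r=1.0, c=c, G=G, lam=0.8))  # True
print(is_lambda_contractive(A_cl, r=1.0, c=c, G=G, lam=0.5))  # False
```

For general polytopic $\mathcal{E}(k)$ the left-hand support function requires a small LP per facet, which is what the cited co-design LP bundles together with the optimization of $K(k)$ and $\lambda(k)$.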

3. Learning-Based Tube Tightening for State/Input-Dependent Uncertainty

Real-world applications often encounter disturbances whose bounds depend on the current state and input. This regime invalidates classical invariant set iterations due to a circular dependency: verifying invariance for a given tube requires evaluating all disturbances supported on the tube itself. A learning-based solution is provided in (Ramadan et al., 16 Jan 2026), where Gaussian Process (GP) regression models the disturbance mean and variance conditional on $(x,u)$.

The GP-generated disturbance credible region, an ellipsoid, is outer-approximated by a polytope $\mathcal{W}_{\text{poly}}(x, u) = \{ w : H_w w \leq h_w(x, u) \}$, with support functions explicitly derived from the GP posterior. As more data are gathered, the posterior variance decreases and the polytopic wrapper tightens.
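The polytopic wrapper can be sketched via the support function of the credible ellipsoid: along facet normal $a$, it equals $a^\top \mu + \beta \sqrt{a^\top \Sigma a}$. A minimal 2-D sketch, with hypothetical posterior mean $\mu$, covariance $\Sigma$, and scaling $\beta$:

```python
import math

# Outer polytopic approximation of an ellipsoidal credible region (sketch).
# Credible set: {w : (w - mu)^T Sigma^{-1} (w - mu) <= beta^2}.  Its support
# function along direction a is  a . mu + beta * sqrt(a^T Sigma a), so each
# facet offset of {w : H w <= h} comes from evaluating it at the facet normals.

def facet_offsets(H, mu, Sigma, beta):
    h = []
    for a in H:
        quad = sum(a[i] * Sigma[i][j] * a[j] for i in range(2) for j in range(2))
        h.append(a[0] * mu[0] + a[1] * mu[1] + beta * math.sqrt(quad))
    return h

H = [[1, 0], [-1, 0], [0, 1], [0, -1]]       # axis-aligned bounding facets
mu = [0.0, 0.0]
h_early = facet_offsets(H, mu, [[0.04, 0.0], [0.0, 0.04]], beta=2.0)
h_late  = facet_offsets(H, mu, [[0.01, 0.0], [0.0, 0.01]], beta=2.0)

# As the GP posterior variance shrinks with data, the wrapper tightens:
assert all(hl <= he for hl, he in zip(h_late, h_early))
print(round(h_early[0], 6), round(h_late[0], 6))
```

Since each offset scales with the posterior standard deviation along the facet normal, shrinking variance translates directly into a nested, tighter disturbance polytope.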

A lifted, isotone fixed-point operator is defined in a graph space $\mathcal{G}$ of triples $(x,v,w)$, with the update map

$$F(Z) = \tilde{A} Z \oplus \tilde{B} \Delta \mathcal{V} \oplus \tilde{D} \mathcal{W}(Z) \cap \mathcal{G}.$$

Iterative application produces a decreasing sequence converging to a disturbance-invariant fixed point, whose projection yields the RPI tube cross-section. This two-time-scale architecture separates outer learning epochs (where polytopes are frozen and tightened) from inner, contraction-based tube refinement.

As epochs advance and data accumulate, the resulting tubes nest monotonically and shrink, formalizing the "shrinking tube" property under state/input-dependent uncertainty (Ramadan et al., 16 Jan 2026).
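The two-time-scale behavior can be illustrated with a scalar analogue (a loose sketch, not the lifted operator itself; all constants hypothetical): a state-dependent bound $\bar{w}(r) = w_0 + \gamma r$ captures the circular dependency, the inner loop iterates the isotone map to its fixed point, and each outer epoch tightens $(w_0, \gamma)$:

```python
# Two-time-scale sketch of the learning-based shrinking tube (scalar, hypothetical).
# Inner loop: with the disturbance wrapper frozen, iterate the isotone map
#   r <- a_cl * r + w_bar(r),   w_bar(r) = w0 + gamma * r,
# which contracts to the fixed point r* = w0 / (1 - a_cl - gamma) if a_cl + gamma < 1.
# Outer loop: each data epoch tightens (w0, gamma), so fixed points nest and shrink.

def inner_fixed_point(a_cl, w0, gamma, r0, iters=200):
    r = r0
    for _ in range(iters):
        r = a_cl * r + (w0 + gamma * r)
    return r

a_cl = 0.6
epochs = [(0.30, 0.10), (0.20, 0.05), (0.10, 0.02)]  # (w0, gamma) tightening with data
radii = []
r = 5.0                     # start from a large, safely conservative tube
for w0, gamma in epochs:
    r = inner_fixed_point(a_cl, w0, gamma, r0=r)
    radii.append(r)

# Tubes nest monotonically across epochs:
assert radii[0] >= radii[1] >= radii[2]
print([round(x, 4) for x in radii])
```

Warm-starting each inner loop from the previous epoch's fixed point mirrors the monotone nesting: since the tightened map lies below the old one pointwise, the new fixed point can only be smaller.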

4. System Level Disturbance Reachable Sets (SL-DRS) and FIR-Based Shrinking Tubes

System Level Disturbance Reachable Sets (SL-DRS) (Sieber et al., 2021) dispense with the need for recursive set-invariance conditions by parametrizing all affine, time-varying stabilizing controllers over a finite time horizon $N$ through the system level parameterization (SLP). Here, the error dynamics under closed-loop feedback are represented by the system response matrices $\Phi_e, \Phi_k$, with the disturbance impact at horizon step $i$ given by

$$\mathcal{F}_{e,i}(\Phi_e) = \bigoplus_{j=0}^{i-1} \Phi_e^j \mathcal{W},$$

yielding a nested sequence $\mathcal{F}_{e,0} \subseteq \mathcal{F}_{e,1} \subseteq \dots \subseteq \mathcal{F}_{e,N}$.

Imposing a finite impulse response (FIR) constraint ($\Phi_e^N = 0$) causes the disturbance impact to vanish beyond step $N$, making the tube constant in the tail while facilitating shrinking cross-sections up to the terminal time. If the $\Phi_e$ blocks are contractive, the incremental difference sets $\mathcal{F}_{e,i+1} \ominus \mathcal{F}_{e,i} = \Phi_e^i \mathcal{W}$ also contract, guaranteeing that the cross-section growth increments decrease toward the horizon.
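For a scalar channel with interval disturbance, the SL-DRS radii are just partial sums of the response-block magnitudes, which makes the nesting, the shrinking increments, and the FIR tail immediate to check (a sketch with hypothetical numbers):

```python
# SL-DRS cross-section radii for a scalar channel (sketch, hypothetical numbers).
# With interval disturbance W = [-w_bar, w_bar] and scalar response blocks
# Phi_e^j, the reachable-set radius at step i is  sum_{j=0}^{i-1} |Phi_e^j| * w_bar.
# The FIR constraint Phi_e^j = 0 for j >= N freezes the radius beyond the horizon,
# while contractive blocks make the increments Phi_e^i W shrink toward the tail.

def sldrs_radii(phi_blocks, w_bar, steps):
    return [sum(abs(p) for p in phi_blocks[:i]) * w_bar for i in range(steps + 1)]

N, rho, w_bar = 5, 0.5, 0.1
phi = [rho**j for j in range(N)] + [0.0] * 10     # FIR: blocks vanish after N
radii = sldrs_radii(phi, w_bar, steps=10)

assert all(r2 >= r1 for r1, r2 in zip(radii, radii[1:]))   # nested: F_{e,i} grows with i
assert radii[N] == radii[10]                               # constant tail after FIR cutoff
increments = [r2 - r1 for r1, r2 in zip(radii, radii[1:N + 1])]
assert all(i2 <= i1 for i1, i2 in zip(increments, increments[1:]))  # shrinking increments
print([round(r, 4) for r in radii])
```

The same bookkeeping carries over to the matrix case, with each $|\Phi_e^j|$ replaced by a set image $\Phi_e^j \mathcal{W}$ and sums replaced by Minkowski sums.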

Both online (concurrent with nominal trajectory optimization) and offline (precomputed) SL-DRS-based tube-MPC formulations are available. Offline computation proceeds by selecting $\Phi_e, \Phi_k$ to minimize worst-case constraint tightening, then storing the cross-sections for real-time use.

5. Recursive Feasibility, Contractivity, and Stability Guarantees

Shrinking disturbance-invariant tubes fundamentally address recursive feasibility and exponential stability in MPC. The adaptive or learning-driven contraction ensures that, provided the problem is feasible at the initial time, the constraint tightening applied at each subsequent step is no more conservative, thus enlarging the feasible region of the nominal MPC. Stability is typically certified via a Lyapunov candidate adapted to the tube structure. For example, in (Ghiasi et al., 24 Dec 2025), a polyhedral Lyapunov function

$$V(e) = \max_{j = 1,\ldots,q} \frac{H_e^{j,:} e}{h_e^j(k)}$$

shows strict geometric decay under contraction and bounded convergence under disturbance.
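This gauge-type function is cheap to evaluate, and its decay along a contractive, undisturbed closed loop can be checked directly (a 2-D sketch with a hypothetical closed-loop matrix):

```python
# Polyhedral Lyapunov (gauge) function sketch:  V(e) = max_j (H_j e) / h_j.
# Under a contractive closed-loop map and zero disturbance, V decays
# geometrically along trajectories; all numbers below are hypothetical.

def V(e, H, h):
    return max((Hj[0] * e[0] + Hj[1] * e[1]) / hj for Hj, hj in zip(H, h))

def step(e, A_cl):
    return [A_cl[0][0] * e[0] + A_cl[0][1] * e[1],
            A_cl[1][0] * e[0] + A_cl[1][1] * e[1]]

H = [[1, 0], [-1, 0], [0, 1], [0, -1]]   # facets of a box tube cross-section
h = [1.0, 1.0, 1.0, 1.0]
A_cl = [[0.5, 0.1], [0.0, 0.5]]          # contractive closed-loop matrix

e = [1.0, -1.0]
values = []
for _ in range(5):
    values.append(V(e, H, h))
    e = step(e, A_cl)

assert all(v2 < v1 for v1, v2 in zip(values, values[1:]))   # strict geometric decay
print([round(v, 4) for v in values])
```

With a disturbance present, the same $V$ instead decays until it reaches a neighborhood whose size is set by the disturbance bound, matching the bounded-convergence statement above.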

In the learning-based setting, tube invariance is certified at the desired coverage level (e.g., with probability at least $1 - \alpha$), and the monotonic tube nesting ensures that hard constraints remain satisfied even as the tubes shrink across epochs (Ramadan et al., 16 Jan 2026).

6. Comparative Summary and Implementation Considerations

The following table summarizes salient features across principal approaches:

| Approach | Tube Adaptation | Disturbance Model |
|---|---|---|
| Zonotopic LP co-design (Ghiasi et al., 24 Dec 2025) | Adaptive, facet-scaling, $\lambda$-contractive | Zonotope, data- and constraint-driven |
| GP-based lifting (Ramadan et al., 16 Jan 2026) | Learning-driven, monotonic, two-time-scale shrinking | Polytopic, state- and input-dependent |
| SL-DRS + FIR (Sieber et al., 2021) | Contractive finite-horizon (shrinking to tail) | General convex set |

Implementations must balance computational tractability (facet count, set operations), online/offline decomposition, and guarantees for real-time responsiveness. Computing or updating supporting hyperplanes, managing GP complexity, and handling high-dimensional tubes are central concerns. A plausible implication is that further scaling of learning-based tubes to nonlinear plants or colored disturbances will require additional theoretical and algorithmic advances.

7. Applications and Research Directions

Shrinking disturbance-invariant tubes have advanced the design of robust, adaptive MPC for systems where model and disturbance uncertainty decrease over time, including autonomous navigation, aerospace control, and robotics (Ghiasi et al., 24 Dec 2025, Ramadan et al., 16 Jan 2026, Sieber et al., 2021). Their structure supports the systematic exploitation of new data, prior physical knowledge, and contractive closed-loop dynamics to minimize conservatism and realize provable safety in safety-critical applications.

Future research is expected to focus on efficient GP updating, refinement of state/input-dependent uncertainty wrappers, extension to nonlinear and hybrid systems, and the integration of system level and learning-based paradigms. Rigorous scalability and real-time performance remain open technical challenges.
