
Learning-Augmented Tube MPC

Updated 12 March 2026
  • The paper introduces a learning-augmented tube MPC framework that fuses data-driven uncertainty set refinement with traditional robust control strategies.
  • It employs adaptive tube sizing and online ancillary feedback gain synthesis via linear programming to reduce conservatism and improve feasibility.
  • The method ensures recursive feasibility and exponential stability by consistently integrating prior knowledge with real-time data updates.

Learning-augmented tube Model Predictive Control (MPC) integrates set-based robust control with online or data-driven learning components to adaptively shape the "tube" that contains all admissible closed-loop trajectories under model uncertainty, disturbances, or external influences. The central goal is to reduce worst-case conservatism of classical tube MPC by continuously refining model uncertainty and disturbance sets from measured data, prior knowledge, or explicit statistical/learning models, resulting in smaller tubes, less constraint tightening, and improved feasibility and performance guarantees.

1. Problem Formulation and Background

Classical tube MPC constructs a nominal (center) trajectory using a known or conservative model and maintains all possible true system trajectories within a tube—an invariant set around the nominal trajectory—by employing an ancillary feedback law. Tube tightening is determined using bounds on model uncertainty and disturbances. However, if the uncertainty or disturbance set is overly conservative, feasibility shrinks and closed-loop performance degrades.

Learning-augmented tube MPC techniques address this by actively refining and shrinking the tube using data-driven set learning, adaptive feedback gain selection, or disturbance model identification. Recent approaches go further: they parameterize disturbance/model sets as matrix zonotopes, polytopes, or ellipsoidal over-approximations, calibrated either from measured trajectories, statistical learning, or hybrid methods that fuse offline prior knowledge with online observations (Ghiasi et al., 24 Dec 2025).

2. Learning Data- and Prior-Consistent Uncertainty Sets

A central mechanism in learning-augmented tube MPC is the construction and iterative refinement of uncertainty sets for both dynamics and disturbances. The workflow proposed in "Safe Navigation with Zonotopic Tubes: An Elastic Tube-based MPC Framework" (Ghiasi et al., 24 Dec 2025) is as follows:

  • Batch data collection: Collect T measurements of states and inputs, and form data matrices U_0, X_0, X_1, and D_0.
  • Disturbance sequence set: Model the disturbance sequence W_0 as a multi-step zonotope M_{Z_w^T}.
  • Open-loop model set: Use the data equation X_1 = A* X_0 + B* U_0 + W_0 to compute a data-driven matrix zonotope for [A B].
  • Data-consistency refinement: Impose affine constraints ensuring the realized disturbance sequence is consistent with the observed data, yielding a constrained zonotope M_w.
  • Prior knowledge intersection: If a prior set M_prior is known, intersect it with M_w to yield a (potentially much smaller) consistent uncertainty set M_dw.
  • Refined model sets: Substitute M_dw into the open- and closed-loop model zonotope constructions, maintaining constraints from both data and prior knowledge.
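The prior-knowledge intersection step above can be illustrated in one dimension, where zonotopes reduce to intervals. This is a minimal sketch, not the paper's implementation; the set names follow the text but the numbers are made up.

```python
# 1-D illustration of the data/prior fusion step: in one dimension a zonotope
# [c - g, c + g] is just an interval, so intersecting the data-consistent set
# M_w with a prior set M_prior is an interval intersection. All values are
# hypothetical.

def interval(center, radius):
    return (center - radius, center + radius)

def intersect(a, b):
    """Intersection of two intervals, or None if they are disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

M_w = interval(0.0, 0.5)        # data-consistent disturbance set
M_prior = interval(0.1, 0.3)    # prior knowledge
M_dw = intersect(M_w, M_prior)  # refined, potentially much smaller set

def width(s):
    return s[1] - s[0]

assert M_dw is not None
assert width(M_dw) <= min(width(M_w), width(M_prior))
```

The same monotonicity holds in general: intersecting with a prior can only shrink (never enlarge) the consistent set.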

This process enables principled fusion of offline (prior) and online (empirical) information for simultaneous identification of model and disturbance sets, improving both the modeling accuracy and reducing conservatism (Ghiasi et al., 24 Dec 2025).

3. Tube MPC with Adaptive Tube Sizing and Feedback Synthesis

The learning-augmented scheme decomposes the system state as x_k = x̄_k + e_k (nominal plus error), recasts the dynamics using the refined nominal matrices, and defines the error tube at each time via a polytope E_k = {e : H_e e ≤ h_e(k)}. The tube cross-section scaling h_e(k) is a time-varying sequence.
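As a concrete (hypothetical) instance of this tube representation, membership e ∈ E_k amounts to checking the facet inequalities H_e e ≤ h_e(k) row-wise:

```python
# Hypothetical 2-D box tube E = {e : H_e e <= h_e}; the facet normals H_e and
# scalings h_e below are illustrative, not taken from the paper.

H_e = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # facet normals of a 2-D box
h_e = [0.5, 0.5, 0.2, 0.2]                 # facet scalings h_e(k) at some time k

def in_tube(e, H, h):
    """True iff H e <= h holds row-wise."""
    return all(sum(Hij * ej for Hij, ej in zip(row, e)) <= hi
               for row, hi in zip(H, h))

assert in_tube((0.4, -0.1), H_e, h_e)      # inside the box
assert not in_tube((0.6, 0.0), H_e, h_e)   # violates the first facet
```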

At each time step, a joint optimization is performed to select both the ancillary feedback gain K_t and the tube size h_e(t+1), along with a contraction factor λ(t) < 1. This is posed as a linear program of the form:

    minimize over P, V_K, ρ, λ:   ρ + σ λ
    subject to (tube contractivity, affine consistency, gain-norm bounds):
        P h_e(t) ≤ λ h_e(t) - H_e c_h - ρ ℓ(t) - z(t) - y
        P H_e = H_e (X_1 - C_dw) V_K
        ‖V_K‖ ≤ ρ
        X_0 V_K = I
        0 < λ < 1,   P ≥ 0

The update h_e(t+1) = λ(t) h_e(t) guarantees that the error tube shrinks whenever possible, resulting in an adaptive, λ-contractive, robustly invariant tube that is automatically adjusted as the learned uncertainty sets contract (Ghiasi et al., 24 Dec 2025). The main innovation lies in co-designing the feedback gain and tube geometry online, removing the need to size the tube for the worst-case error across all operating points.
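The scaling update itself is simple to sketch. In the paper λ(t) is returned by the per-step LP; here it is fixed at a hypothetical constant purely for illustration:

```python
# Sketch of the facet-wise tube update h_e(t+1) = lam(t) * h_e(t). A fixed
# lam stands in for the LP-optimized contraction factor of the paper.

lam = 0.8
h_e = [1.0, 1.0, 0.5, 0.5]       # initial facet scalings
history = [list(h_e)]
for t in range(5):
    h_e = [lam * hi for hi in h_e]
    history.append(list(h_e))

# After k steps each facet has shrunk by a factor of lam**k.
assert abs(history[5][0] - lam ** 5) < 1e-9
assert all(history[t + 1][0] < history[t][0] for t in range(5))
```

With a time-varying λ(t) the product of the factors plays the same role; any sequence bounded away from 1 still gives geometric contraction.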

4. Recursive Feasibility and Stability Guarantees

Recursive feasibility is established via a shifted-trajectory argument: the adaptively contracted tube at time t+1 is a subset of the tube at time t, and the associated feedback gain ensures that the ancillary controller remains valid for the smaller tube. The constraints therefore remain satisfied for all times, provided the problem is feasible at t = 0. The Lyapunov function on the error tube,

V(e) = max_j (H_e^{j,·} e) / h_e^j,

decreases exponentially in the absence of disturbances, V(e_{k+1}) ≤ λ(k) V(e_k), and under bounded disturbances converges to a bounded neighborhood, V(e_{k+1}) ≤ λ̄ V(e_k) + c_w (Ghiasi et al., 24 Dec 2025). This yields exponential stability of the closed-loop error dynamics, while the nominal MPC cost also decays, guaranteeing asymptotic stability of the nominal system.
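A minimal scalar example (assumed here, not taken from the paper) makes both decay properties concrete: for the error dynamics e⁺ = a e + w with |a| < 1 and the interval tube {e : |e| ≤ h}, V(e) reduces to |e|/h and satisfies the disturbed bound with λ̄ = |a| and c_w = w_max/h.

```python
# Scalar illustration (assumed, not from the paper): e_next = a*e + w with
# |a| < 1 and interval tube {e : |e| <= h}. Then V(e) = |e|/h obeys
# V(e_next) <= lam_bar*V(e) + c_w with lam_bar = |a| and c_w = w_max/h.

a, h, w_max = 0.7, 1.0, 0.1

def V(e):
    return abs(e) / h            # max over the two facets e <= h and -e <= h

e = 0.9
for w in [0.1, -0.05, 0.08, -0.1]:        # bounded disturbances, |w| <= w_max
    e_next = a * e + w
    assert V(e_next) <= abs(a) * V(e) + w_max / h + 1e-12
    e = e_next

# V(e) settles inside the neighborhood V <= c_w / (1 - lam_bar)
assert V(e) <= (w_max / h) / (1 - abs(a)) + 1e-12
```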

5. Computational Framework and Practicalities

The implementation proceeds in three phases:

  • Offline: Specify initial disturbance zonotope and prior model zonotope; initialize tube size.
  • Initial data batch: Gather T samples, build data- and prior-consistent zonotopic sets, and refine the open-loop uncertainty set.
  • Online loop: At each sampling time,

    1. Update feedback/tube via linear programming,
    2. Solve the standard nominal MPC with tightened constraints,
    3. Update data and disturbance set using new observations and prior constraints,
    4. Recompute zonotopic model and tube sets as above.
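The three-phase workflow can be caricatured end-to-end for a scalar system; every model, gain, and refinement rule below is a hypothetical stand-in for the paper's zonotopic sets and LP/QP solves.

```python
# Toy end-to-end sketch of the offline / batch / online phases for a scalar
# system x+ = a*x + b*u + w. The "set refinement" here is a crude empirical
# bound update, standing in for the zonotopic machinery of the paper.

a, b = 0.9, 1.0
true_w = [0.05, -0.04, 0.03, -0.05, 0.02]     # bounded disturbance realization

K = -0.4                     # ancillary gain, so the error map is a + b*K = 0.5
w_bound = 0.5                # offline phase: conservative prior disturbance bound
h = w_bound / (1 - 0.5)      # initial tube radius for the 0.5-contractive error
x = x_bar = 0.3
observed = 0.0

for w in true_w:
    u = K * (x - x_bar)                       # ancillary feedback (nominal input is 0)
    x_next = a * x + b * u + w                # true system step
    x_bar = a * x_bar                         # nominal prediction
    residual = abs(x_next - (a * x + b * u))  # observed disturbance sample
    observed = max(observed, residual)
    w_bound = min(w_bound, 1.5 * observed)    # refine set, keeping a safety margin
    h = min(h, w_bound / (1 - 0.5))           # re-size tube from the refined set
    x = x_next

assert w_bound < 0.5 and h < 1.0              # both the set and the tube shrank
```

Even this crude update mirrors the qualitative behavior reported in the paper: a few observed samples are enough to shrink the conservative prior bound, and the tube radius contracts with it.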

The LP and QP problems are of moderate complexity (the LP has O(q^2 + mT + nT) variables for q tube facets; the QP is a standard MPC problem), with zonotope operations (intersection, support-function evaluation) performed via matrix arithmetic and limited vertex enumeration. Empirical results suggest that a short data batch (10–50 samples) suffices to realize significant tube contraction and improved feasibility (Ghiasi et al., 24 Dec 2025).

6. Performance Gains and Comparative Analysis

Learning-augmented tube MPC—especially the elastic, data-driven zonotopic tube approach—delivers several empirically demonstrated benefits:

  • The feasible region is substantially larger than that of fixed-gain, fixed-tube schemes.
  • The tube contracts adaptively in data-rich regions, or wherever the refined model and disturbance sets permit, rather than remaining uniformly conservative across all state/input regions.
  • Jointly updating the tube and feedback gain increases tolerance to severe disturbances without sacrificing set invariance or violating constraints.
  • Exponential convergence of the cost and of the error-tube size is observed in simulation.
  • A small amount of online data suffices to trigger contraction, so performance improves early, before large datasets accumulate.

Related works using different learning paradigms—Gaussian processes for state/input-dependent tubes (Ramadan et al., 16 Jan 2026), fuzzy models for state-dependent disturbances (Surma et al., 2023), deep quantile regression for probabilistic tubes (Fan et al., 2020), or Bayesian/certification-layered methods—share a similar architecture: they parameterize and refine the uncertainty set using data, adapt the tube geometry and ancillary control accordingly, and rigorously propagate safety and recursive feasibility guarantees through the closed-loop system.

7. Connections to Robust Safe Learning and Extensions

This methodology connects directly to the broader learning-for-MPC literature, which seeks to retain robustness and theoretical guarantees while leveraging available data for improved control performance (Gros et al., 2020). The learning-augmented elastic tube MPC paradigm is compatible with a variety of learning modules—e.g., deep neural models, Gaussian processes, or hybrid Bayesian-fuzzy systems—and has been extended to nonlinear, multi-agent, adaptive, or high-dimensional settings with similar structure-preserving benefits (Compton et al., 2024, Wang et al., 4 Apr 2025).

A plausible implication is that learning-augmented tube MPC frameworks, particularly when supported by formal set-invariance, recursive feasibility, and Lyapunov-based stability results, provide a rigorous foundation for safe, adaptive, and less conservative robust control of uncertain high-dimensional systems using moderate data and scalable computation (Ghiasi et al., 24 Dec 2025).
