
Regularized DeePC

Updated 21 January 2026
  • Regularized DeePC is a data-driven predictive control method augmented with norm penalties, slack variables, and distributional constraints.
  • It enhances robustness by mitigating noise, uncertainty, and nonlinearity through structured penalties and innovation-based regularization.
  • Recent developments integrate Regularized DeePC with robust MPC and stochastic filtering, offering formal performance guarantees in dynamic environments.

Regularized Data-Enabled Predictive Control (DeePC) augments the foundational DeePC framework for direct data-driven predictive control with explicit regularization mechanisms. These methods enforce statistical, structural, and robustness constraints through penalty terms in the underlying optimization, enhancing reliability against noise, uncertainty, nonlinearity, and data limitations. Recent developments unify regularized DeePC with stochastic filtering, robust MPC, and subspace predictive control, with rigorous connections to convex relaxations, distributional robustness, and innovation-based constraints.

1. Foundations of Regularized DeePC

DeePC constructs optimal control policies directly from input-output measurements by leveraging Willems' Fundamental Lemma, which guarantees that all trajectories of an unknown controllable LTI system can be expressed as linear combinations of columns of data-arranged Hankel matrices. In the nominal case, DeePC solves for a weight vector $g$ subject to equality constraints matching the system's initial trajectory window and the predicted future, typically with a quadratic tracking or performance cost.
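A minimal sketch of the Hankel construction for scalar signals (pure Python; the function name is illustrative). The second snippet shows the Fundamental Lemma in its simplest, autonomous form: for data generated by a known recursion, any linear combination of Hankel columns is again a valid trajectory.

```python
def hankel(w, depth):
    """Build a depth-`depth` Hankel matrix from trajectory w (list of samples).

    Column j stacks w[j], w[j+1], ..., w[j+depth-1], so every length-`depth`
    window of the data appears as one column.
    """
    cols = len(w) - depth + 1
    return [[w[i + j] for j in range(cols)] for i in range(depth)]

# Example: scalar trajectory of length 6, windows of length 3.
w = [0, 1, 2, 3, 4, 5]
H = hankel(w, 3)
# H == [[0, 1, 2, 3],
#       [1, 2, 3, 4],
#       [2, 3, 4, 5]]

# For data from the scalar system y[k+1] = 2*y[k], every column satisfies
# the same recursion, and so does any linear combination of columns.
y = [1, 2, 4, 8, 16]
Hy = hankel(y, 3)
g = [0.5, -1.0, 2.0]  # arbitrary column weights
traj = [sum(Hy[i][j] * g[j] for j in range(3)) for i in range(3)]
assert all(abs(traj[k + 1] - 2 * traj[k]) < 1e-9 for k in range(2))
```

In full DeePC the Hankel matrix is split into "past" and "future" blocks, and the past block is constrained to match the measured initial window.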

In realistic scenarios, however, measurement noise, unmodeled dynamics, and nonlinearities render strict enforcement of the fundamental-lemma constraints infeasible or unsafe. Regularized DeePC introduces additional terms to the objective function, such as norm penalties, slack variables, distributional robustness constraints, or structural soft constraints, biasing the optimization towards robust, interpretable, or well-posed solutions. Regularized DeePC encompasses methods such as adding $\ell_1$ or $\ell_2$ penalties on $g$, Mahalanobis-distance regularization to limit distributional shift, and projection-based penalties to enforce structural relationships (Coulson et al., 2018, Coulson et al., 2019, Ramadan et al., 28 Jan 2025, Shang et al., 2023, Liu et al., 16 Dec 2025).

2. Regularization Paradigms and Theoretical Justification

2.1 $\ell_1$ and $\ell_2$ Penalties

The most widely adopted regularization in DeePC uses $\ell_1$ (Lasso-type) or $\ell_2$ (Tikhonov-type) penalties on the trajectory-generating variable $g$:

$$\min_{g,u,y,\sigma_y}\ \sum_{k=0}^{N-1}\|y_k - r_{t+k}\|_Q^2 + \|u_k\|_R^2 + \lambda_g\|g\|_p + \lambda_y\|\sigma_y\|_1$$

where $p=1$ or $p=2$, and $\sigma_y$ relaxes only the past-output consistency constraint. The $\ell_1$ norm promotes sparsity (selection of few data trajectories), empirically improving robustness to outliers and noise; $\ell_2$ regularization smooths $g$, discouraging large coefficients and reducing overfitting (Coulson et al., 2018, Huang et al., 2021).
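The effect of the $\ell_2$ (Tikhonov) term can be seen in a minimal sketch: with the equality constraints folded into a least-squares residual, the regularized problem has the closed form $g = (H^\top H + \lambda I)^{-1} H^\top v$. The toy 2-column matrix and numbers below are illustrative only.

```python
def ridge_g(H, v, lam):
    """Solve min_g ||H g - v||^2 + lam*||g||^2 for a 2-column H via the
    normal equations (H^T H + lam*I) g = H^T v, using Cramer's rule."""
    a11 = sum(r[0] * r[0] for r in H) + lam
    a12 = sum(r[0] * r[1] for r in H)
    a22 = sum(r[1] * r[1] for r in H) + lam
    b1 = sum(r[0] * y for r, y in zip(H, v))
    b2 = sum(r[1] * y for r, y in zip(H, v))
    det = a11 * a22 - a12 * a12
    return [(b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det]

H = [[1.0, 1.0], [1.0, 1.001]]  # nearly collinear columns, as with noisy data
v = [1.0, 2.0]
g_raw = ridge_g(H, v, 1e-12)    # essentially unregularized: g blows up
g_reg = ridge_g(H, v, 0.1)      # Tikhonov-regularized: moderate, well-posed g
```

Here `g_raw` is roughly $(-999, 1000)$ while every entry of `g_reg` stays below one in magnitude, illustrating how the penalty suppresses the noise-amplifying directions of the data matrix.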

2.2 Distributionally Robust and Innovation-based Regularization

Recent work demonstrates that DeePC with $\ell_1$ or $\ell_2$ regularization is equivalent to robustification against Wasserstein-ball uncertainty in the training data, yielding a convex distributionally robust optimization (DRO) program with formal out-of-sample performance certificates (Coulson et al., 2019, Huang et al., 2021). Mechanistically, such regularizers correspond to box or ellipsoidal uncertainty sets on the Hankel operator, respectively.

In the stochastic LTI setting, the optimal $g$ should lie in the null space of the innovation Hankel matrix $E_f$, ensuring predictions are consistent with the steady-state Kalman filter. Regularized DeePC achieves this by penalizing the energy of the innovation component, $\|E_f g\|^2$, with the hard constraint $E_f g = 0$ recovering the Kalman predictor as data sufficiency increases. Weighted penalties can shape suppression of specific innovation directions (Liu et al., 16 Dec 2025).

2.3 Projection and Structural Regularization

Where the column space of the data matrix or row-space structure is important, penalties such as $\|(I-\Pi)g\|_2^2$ (with $\Pi$ projecting onto the row space of selected Hankel blocks) enforce soft row-space or subspace constraints. Variants frequently arise from convex relaxations of bi-level DeePC formulations, where hard identification constraints (rank, row-space, or Hankel structure) are replaced by penalty terms. For instance, SVD-compressed DeePC achieves efficient dimensionality reduction while penalizing deviations from the fundamental system subspace, and nuclear-norm penalties relax low-rank identification (Shang et al., 2023, Jong et al., 16 Dec 2025).

2.4 Mahalanobis-Distance (Distributional Shift) Regularization

To mitigate harmful extrapolation in nonlinear systems, Mahalanobis-distance penalties quantify and penalize departure from the empirical distribution of observed input-output blocks. This strategy limits closed-loop exploration to the (estimated) support of the data, thereby guarding against instability or activation of unmodeled nonlinearities. Both truncated and un-truncated quadratics can be used, tuned to a desired confidence level via a $\chi^2$ threshold (Ramadan et al., 28 Jan 2025).
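A hedged sketch of how such a distribution-shift check can be evaluated; the helper name, the 2-D data, and the specific $\chi^2$ quantile are illustrative assumptions, not taken from the cited paper.

```python
def mahalanobis_sq(x, data):
    """Squared Mahalanobis distance of point x from the empirical
    distribution of 2-D `data` (a list of (a, b) pairs)."""
    n = len(data)
    mu = [sum(p[i] for p in data) / n for i in range(2)]
    # Sample covariance (2x2) and its explicit inverse.
    c = [[sum((p[i] - mu[i]) * (p[j] - mu[j]) for p in data) / (n - 1)
          for j in range(2)] for i in range(2)]
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    inv = [[c[1][1] / det, -c[0][1] / det],
           [-c[1][0] / det, c[0][0] / det]]
    d = [x[0] - mu[0], x[1] - mu[1]]
    return sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))

# Data lying near a diagonal; queries on vs. off that distribution.
data = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.1), (3.0, 2.9), (4.0, 4.2)]
CHI2_95_2DOF = 5.991  # 95% quantile of chi^2 with 2 degrees of freedom
in_support = mahalanobis_sq((2.0, 2.0), data) < CHI2_95_2DOF
far_out = mahalanobis_sq((0.0, 4.0), data) >= CHI2_95_2DOF
```

In the regularized program this quantity would enter as a (possibly truncated) quadratic penalty rather than a hard accept/reject test, softly steering the optimizer back toward the data support.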

2.5 Basis Function and Nonlinear Regularization

For nonlinear systems, DeePC is lifted via general basis functions or kernel regression. Regularization is essential for consistency with identified multi-step predictors. Dynamic regularization (penalizing the deviation of $g$ from its pseudoinverse-mapped counterpart) and projector-based penalties such as $\lambda\|(I - \Phi^+\Phi)g\|^2$ enforce equivalence to subspace predictive control (SPC). SVD-based and LASSO-based basis selection further reduce computational and data requirements (Lazar, 2023, Jong et al., 16 Dec 2025).
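As a minimal illustration of lifting, with a monomial basis standing in for the general basis-function map $\Phi$ (the function name and basis choice are illustrative, not from the cited papers):

```python
def lift(window, degree=2):
    """Replace each scalar sample x in a window by (x, x**2, ..., x**degree),
    a minimal stand-in for a general basis-function map."""
    return [x ** d for x in window for d in range(1, degree + 1)]

# A length-3 raw window becomes a length-6 lifted feature vector; Hankel
# columns are then assembled from lifted windows instead of raw ones, and
# the regularized program operates on those lifted columns.
features = lift([1.0, 2.0, 3.0])
```

The dimension growth of such liftings is exactly what motivates the SVD- and LASSO-based basis reduction discussed above.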

3. Algorithmic Formulation and Computational Aspects

Regularized DeePC is typically formulated as a single-level convex quadratic program or, for $\ell_1$ variants, a linear or conic program. The general structural form is:

  • Standard DeePC constraints: $Hg = v$ (data-driven trajectory prediction).
  • Control/tracking cost: $\|y - r\|_Q^2 + \|u\|_R^2$ (the usual MPC-like objective).
  • Regularization on $g$: $\lambda_g\|g\|_p$, $\lambda\|(I-\Pi)g\|^2$, or $\lambda\|E_f g\|^2$ (sparsity, subspace consistency, or innovation suppression).
  • Slack penalties: $\lambda_y\|\sigma_y\|_1$ or $\lambda_\rho\|\rho\|_1$ (robustify against noise or infeasibility in the past-output data).
  • Distribution-shift penalties: $\gamma\sum_k F_{\mathcal{H}}(\Psi_k)$ (robustify against unseen input-output distributions).

In the nonlinear or kernel/basis-function setting, projection-based regularization is imposed on the lifted data, and SVD-based reduction can lower the online decision dimension from $O(T)$ to $O(L+Np)$, where $L$ is the reduced basis size (Jong et al., 16 Dec 2025, Lazar, 2023). Group LASSO enables data-driven feature selection in high-dimensional lifted spaces.

Key algorithmic steps typically include:

  • Offline construction and reduction of Hankel/basis matrices (possibly via SVD, group LASSO),
  • Precomputation of projections/pseudoinverses,
  • At each control step: updating initial windows, solving the regularized QP, and applying the first predicted input.
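The online steps above can be sketched as a receding-horizon loop; the `Plant` dynamics and the proportional stand-in for the solver below are illustrative placeholders for the actual regularized QP.

```python
class Plant:
    """Toy first-order plant y+ = 0.8*y + 0.2*u (unknown to the controller)."""
    def __init__(self):
        self.y = 0.0
    def step(self, u):
        self.y = 0.8 * self.y + 0.2 * u
        return self.y

def run_closed_loop(plant, solve, u_ini, y_ini, steps):
    """Receding-horizon loop: solve for an input plan, apply only its first
    element, measure the new output, and slide the initial windows forward."""
    outputs = []
    for _ in range(steps):
        u_plan = solve(u_ini, y_ini)   # the regularized QP in real DeePC
        u = u_plan[0]                  # apply only the first planned input
        y = plant.step(u)
        u_ini = u_ini[1:] + [u]        # slide initial windows
        y_ini = y_ini[1:] + [y]
        outputs.append(y)
    return outputs

# Demo with a proportional stand-in "solver" steering toward reference 1.0.
solve = lambda u_ini, y_ini: [5.0 * (1.0 - y_ini[-1])] * 3
ys = run_closed_loop(Plant(), solve, [0.0, 0.0], [0.0, 0.0], steps=12)
```

The point of the skeleton is the data flow: the solver sees only the sliding input-output windows, never a model of the plant.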

4. Statistical and Robustness Guarantees

Regularized DeePC admits theoretical support for performance and robustness:

  • Distributionally robust variants guarantee, with high probability, that the closed-loop cost under the true (unknown) system will not exceed the value of the regularized program, provided appropriate Wasserstein-ball radii or regularization parameters are chosen as a function of data size (Coulson et al., 2019).
  • For innovation-based regularization, as $\lambda\to\infty$, DeePC converges to the multi-step Kalman predictor in the mean-square sense under Gaussian noise, and equivalently to SPC under generic noise as long as the innovation null space is enforced (Liu et al., 16 Dec 2025, Jong et al., 16 Dec 2025).
  • Structural penalties (e.g., row-space/projector-based) can be calibrated to guarantee exact equivalence with subspace predictive control provided the penalty exceeds the stage cost Lipschitz constant and the relevant data matrices are full rank (Jong et al., 16 Dec 2025, Shang et al., 2023).
  • For Mahalanobis penalties, distributional shift is deterministically limited to the empirical support, thereby enforcing a statistically meaningful form of constraint satisfaction in closed-loop (Ramadan et al., 28 Jan 2025).
  • $\ell_1$-based regularization has limited explainability: even with block-structured Hankel matrices or data grouping, Lasso regularization alone cannot induce selection of locally consistent operating regimes, limiting controller interpretability for nonlinear systems (Giacomelli et al., 24 Mar 2025).

5. Interpretations and Connections to Other Methods

Regularized DeePC unifies concepts and techniques spanning stochastic filtering, robust optimization, subspace identification, and safe learning:

  • The $\ell_2$ penalty is structurally analogous to robust least-squares with ellipsoidal uncertainty, while $\ell_1$ regularization corresponds to worst-case design under box uncertainty (Huang et al., 2021).
  • Projection- or innovation-based penalties realize "soft" imposition of structurally critical constraints (such as the Kalman filter innovation null-space), establishing DeePC as a data-driven generalization of stochastic optimal prediction (Liu et al., 16 Dec 2025).
  • ARX/IV-based DeePC and projection-based regularization can be viewed as "hard" or "soft" impositions of the same subspace restrictions, varying only in the relaxation parameter.
  • Mahalanobis and distributional penalties have analogues in safe reinforcement learning and conservative model predictive control, where constraint satisfaction in unseen operating regimes is paramount (Ramadan et al., 28 Jan 2025).
  • In nonlinear and kernelized settings, regularized DeePC with basis function lifting bridges behavioral and subspace predictive control, with sparsity (LASSO), dimension reduction (SVD), and dynamic regularization all contributing to tractable, scalable synthesis (Lazar, 2023, Jong et al., 16 Dec 2025).
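The robust least-squares correspondence in the first bullet admits a closed form (a classical robust least-squares identity, stated here in the notation of Section 3, where $H$ is the data matrix and $v$ the stacked trajectory constraint):

```latex
\min_{g}\ \max_{\|\Delta\|_2 \le \rho}\ \big\|(H+\Delta)g - v\big\|_2
\;=\; \min_{g}\ \|Hg - v\|_2 + \rho\,\|g\|_2 ,
```

so an $\ell_2$ penalty with weight $\rho$ is exactly worst-case design against norm-bounded perturbations of the Hankel data, and a column-wise (box) uncertainty bound yields the $\ell_1$ analogue, consistent with (Huang et al., 2021). Note that the penalty here enters on the unsquared residual; the squared-cost variants used in practice are closely related but not identical.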

6. Practical Recommendations, Limitations, and Empirical Findings

Empirical studies across aerial robotics, power electronics, classic nonlinear benchmarks, and synthetic LTI/noisy systems consistently validate the following:

  • Proper tuning of regularization parameters is crucial; closed-loop cost curves are typically U-shaped in the penalty weight, with the best performance at intermediate values. Setting $\lambda=0$ leaves the problem ill-posed and noise-sensitive, while excessively large $\lambda$ over-biases the solution and degrades tracking.
  • Slack-penalty weights ($\lambda_y$ on $\sigma_y$) must be chosen large enough that past-output consistency is relaxed only when genuinely infeasible; weights that are too small let the slack erase essential dynamics, while sufficiently large weights preserve feasibility under noisy data without distorting the solution (Coulson et al., 2018).
  • LASSO, group LASSO, or sparsity-enforcing constraints improve tractability in high-dimensional or nonlinear settings but do not guarantee interpretable or locally valid control solutions—mixing of regimes is the norm unless group structure is explicitly regularized (Giacomelli et al., 24 Mar 2025).
  • Mahalanobis-distance regularization is effective in confining closed-loop trajectories to observationally safe zones, substantially reducing the incidence of catastrophic failures or divergence in systems with latent nonlinearities (Ramadan et al., 28 Jan 2025).
  • SVD-based or ridge regression-based reductions are crucial for online scalability, especially when the dataset is orders of magnitude larger than problem horizon or basis dimension (Jong et al., 16 Dec 2025, Lazar, 2023).

Recent research aims to further generalize regularized DeePC to:

  • Nonlinear settings via basis function expansions and kernel methods, with scalable solvers for lifted or kernel-induced high dimensions (Lazar, 2023, Jong et al., 16 Dec 2025).
  • Unified frameworks that jointly optimize basis selection (via LASSO or group LASSO), structural rank, and subspace consistency, enhancing theoretical guarantees for consistency, safety, and computational tractability.
  • Bi-level and iteratively relaxed formulations yielding convex single-level programs with tailored penalties that recover indirect subspace MPC or classical SPC in the large-penalty regime (Shang et al., 2023).
  • Quantitatively calibrated robust DeePC with region validation, credible intervals, and stochastic performance envelopes.

These developments continue to clarify the theoretical underpinnings, computational scalability, and empirical reliability of regularized DeePC, consolidating its role as a central paradigm in model-free, data-driven predictive control.


Key References:

  • (Coulson et al., 2018): "Data-Enabled Predictive Control: In the Shallows of the DeePC"
  • (Coulson et al., 2019): "Regularized and Distributionally Robust Data-Enabled Predictive Control"
  • (Huang et al., 2021): "Robust Data-Enabled Predictive Control: Tractable Formulations and Performance Guarantees"
  • (Lazar, 2023): "Basis functions nonlinear data-enabled predictive control: Consistent and computationally efficient formulations"
  • (Shang et al., 2023): "Convex Approximations for a Bi-level Formulation of Data-Enabled Predictive Control"
  • (Ramadan et al., 28 Jan 2025): "Floodgates up to contain the DeePC and limit extrapolation"
  • (Giacomelli et al., 24 Mar 2025): "Insights into the explainability of Lasso-based DeePC for nonlinear systems"
  • (Liu et al., 16 Dec 2025): "The Innovation Null Space of the Kalman Predictor: A Stochastic Perspective for DeePC"
  • (Jong et al., 16 Dec 2025): "Scalable Nonlinear DeePC: Bridging Direct and Indirect Methods and Basis Reduction"
