Towards explainable data-driven predictive control with regularizations (2503.21952v1)
Abstract: Data-driven predictive control (DPC), which forms predictions from linear combinations of recorded trajectory data, has recently emerged as a popular alternative to traditional model predictive control (MPC). Without an explicitly enforced prediction model, the effects of commonly used regularization terms (and the resulting predictions) can be opaque. This opacity may lead to practical challenges, such as reliance on empirical tuning of regularization parameters based on closed-loop performance, and potentially misleading heuristic interpretations of norm-based regularizations. However, by examining the structure of the underlying optimal control problem (OCP), more precise and insightful interpretations of regularization effects can be derived. In this paper, we demonstrate how to analyze the predictive behavior of DPC through implicit predictors and the trajectory-specific effects of quadratic regularization. We further extend these results to cover typical DPC modifications, including DPC for affine systems, offset regularizations, slack variables, and terminal constraints. Additionally, we provide a simple but general result on (recursive) feasibility in DPC. This work aims to enhance the explainability and reliability of DPC by providing a deeper understanding of these regularization mechanisms.
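To make the setting concrete, below is a minimal sketch of a quadratically regularized, DeePC-style DPC problem of the kind the abstract refers to: the trajectory variable g selects a linear combination of columns of Hankel matrices built from recorded data, and a quadratic penalty on g acts as the regularization whose effect the paper analyzes. The Hankel-matrix formulation, the symbols U_p, Y_p, U_f, Y_f, u_ini, y_ini, the weight lam_g, and the cvxpy implementation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

# Hypothetical dimensions and placeholder data (illustrative only; not from the paper).
m, p = 1, 1            # input / output dimensions
T_ini, N = 4, 10       # initialization horizon and prediction horizon
L = T_ini + N          # total depth of the Hankel matrices
T = 100                # length of the recorded trajectory

rng = np.random.default_rng(0)
u_d = rng.standard_normal((T, m))   # recorded input trajectory
y_d = rng.standard_normal((T, p))   # recorded output trajectory (placeholder data)

def hankel(w, depth):
    """Build a block-Hankel matrix of the given depth from trajectory w (shape T x d)."""
    d = w.shape[1]
    cols = w.shape[0] - depth + 1
    H = np.zeros((depth * d, cols))
    for i in range(cols):
        H[:, i] = w[i:i + depth, :].reshape(-1)
    return H

H_u, H_y = hankel(u_d, L), hankel(y_d, L)
U_p, U_f = H_u[: T_ini * m, :], H_u[T_ini * m :, :]   # past / future input blocks
Y_p, Y_f = H_y[: T_ini * p, :], H_y[T_ini * p :, :]   # past / future output blocks

u_ini = np.zeros(T_ini * m)         # most recently measured inputs
y_ini = np.zeros(T_ini * p)         # most recently measured outputs
r = np.ones(N * p)                  # output reference over the prediction horizon

g = cp.Variable(H_u.shape[1])       # combination weights over recorded trajectory columns
u = cp.Variable(N * m)              # predicted input sequence
y = cp.Variable(N * p)              # predicted output sequence

lam_g = 10.0                        # quadratic regularization weight (tuning parameter)
cost = cp.sum_squares(y - r) + 0.1 * cp.sum_squares(u) + lam_g * cp.sum_squares(g)
constraints = [
    U_p @ g == u_ini,               # match the recent input history
    Y_p @ g == y_ini,               # match the recent output history
    U_f @ g == u,                   # predicted inputs are a combination of recorded data
    Y_f @ g == y,                   # predicted outputs are a combination of recorded data
]

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first predicted input:", u.value[:m])
```

In this sketch, the term lam_g * ||g||^2 is exactly the kind of quadratic regularization whose trajectory-specific effect on the implicit prediction the paper sets out to explain; in practice its weight is often tuned empirically based on closed-loop performance, which is the opacity the abstract highlights.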