A comparison of methods to eliminate regularization weight tuning from data-enabled predictive control (2305.00807v1)

Published 1 May 2023 in eess.SY and cs.SY

Abstract: Data-enabled predictive control (DeePC) is a recently established form of Model Predictive Control (MPC), based on behavioral systems theory. While it eliminates the need to explicitly identify a model, it requires an additional regularization, with a corresponding weight, to function well with noisy data. Tuning this weight is non-trivial and has a significant impact on performance. In this paper, we compare three reformulations of DeePC that either eliminate the regularization or simplify the tuning to a trivial point. A building simulation study shows comparable performance for all three reformulations of DeePC. However, a conventional MPC with a black-box model slightly outperforms them, while solving much faster and yielding smoother optimal trajectories. Two of the DeePC variants also show sensitivity to unobserved, biased input noise, which is not present in the conventional MPC.
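
To make the tuning issue concrete, below is a minimal sketch of a regularized DeePC problem in Python using cvxpy. The Hankel construction, horizons, the simulated FIR data, and the 1-norm regularizers with weights lam_g and lam_y are illustrative assumptions, not the paper's exact formulation; the point is that lam_g and lam_y are the kind of regularization weights whose tuning the paper seeks to eliminate.

```python
# Minimal regularized DeePC sketch (SISO, illustrative only).
import numpy as np
import cvxpy as cp

def hankel(w, L):
    """Hankel matrix of depth L built from the data sequence w."""
    return np.column_stack([w[i:i + L] for i in range(len(w) - L + 1)])

# Offline: one persistently exciting input/output trajectory (u_d, y_d).
# The FIR system here is a stand-in for real plant data.
rng = np.random.default_rng(0)
T, T_ini, N = 200, 4, 10                       # data length, past/future horizons
u_d = rng.standard_normal(T)
y_d = np.convolve(u_d, [0.5, 0.3, 0.1])[:T] + 0.01 * rng.standard_normal(T)

H_u = hankel(u_d, T_ini + N)                   # split into past/future blocks
H_y = hankel(y_d, T_ini + N)
U_p, U_f = H_u[:T_ini], H_u[T_ini:]
Y_p, Y_f = H_y[:T_ini], H_y[T_ini:]

# Online: the most recent T_ini samples fix the initial condition.
u_ini, y_ini = u_d[-T_ini:], y_d[-T_ini:]
r = np.ones(N)                                 # output reference

g = cp.Variable(U_p.shape[1])
u = cp.Variable(N)
y = cp.Variable(N)
sigma_y = cp.Variable(T_ini)                   # slack for noisy initial outputs
lam_g, lam_y = 10.0, 1e3                       # regularization weights to be tuned

cost = (cp.sum_squares(y - r) + 0.1 * cp.sum_squares(u)
        + lam_g * cp.norm(g, 1) + lam_y * cp.norm(sigma_y, 1))
constraints = [U_p @ g == u_ini,
               Y_p @ g == y_ini + sigma_y,
               U_f @ g == u,
               Y_f @ g == y,
               cp.abs(u) <= 1]
cp.Problem(cp.Minimize(cost), constraints).solve()
print("first optimal input:", u.value[0])
```

In a receding-horizon loop, only the first element of u is applied and the problem is re-solved at the next step; performance hinges on the chosen lam_g and lam_y, which motivates the reformulations the paper compares.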
