Statistical Proper Orthogonal Decomposition for model reduction in feedback control (2311.16332v1)

Published 27 Nov 2023 in math.OC, cs.NA, and math.NA

Abstract: Feedback control synthesis for nonlinear, parameter-dependent fluid flow control problems is considered. The optimal feedback law requires the solution of the Hamilton-Jacobi-Bellman (HJB) PDE, which suffers from the curse of dimensionality. This is mitigated by Model Order Reduction (MOR) techniques, where the system is projected onto a lower-dimensional subspace, over which the feedback synthesis becomes feasible. However, existing MOR methods typically require at least one simplifying assumption: that the system is linear, or stable, or deterministic. We propose a MOR method called Statistical POD (SPOD), which is inspired by the Proper Orthogonal Decomposition (POD) but extends to more general systems. Random samples of the original dynamical system are drawn, treating time and the initial condition as random variables alongside any model parameters, and employing a stabilizing closed-loop control. The reduced subspace is chosen to minimize the empirical risk, which is shown to estimate the expected risk of the MOR solution with respect to the distribution of all possible outcomes of the controlled system. This reduced model is then used to compute a surrogate of the feedback control function in the Tensor Train (TT) format that is computationally fast to evaluate online. Using unstable Burgers' and Navier-Stokes equations, it is shown that the SPOD control is more accurate than the Linear Quadratic Regulator or optimal control derived from a model reduced onto the standard POD basis, and faster than direct optimal control of the original system.
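The abstract describes building the reduced basis from random closed-loop snapshots so that the empirical projection risk is minimized. Below is a minimal sketch of that sampling idea in Python, not the paper's implementation: the solver `simulate_closed_loop`, the toy linear dynamics standing in for Burgers'/Navier-Stokes, and all dimensions are illustrative assumptions.

```python
# Sketch of the SPOD-style snapshot sampling: draw random initial conditions,
# simulate the stabilized closed-loop system, and pick the reduced basis that
# minimizes the empirical projection risk over the sampled snapshots.
# All names and dynamics below are placeholders, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def simulate_closed_loop(x0, n_steps=50, dt=0.1):
    """Placeholder for a stabilized closed-loop solve of the full-order model.
    Returns a (state_dim, n_steps) array of state snapshots."""
    state_dim = x0.size
    # Toy stable linear dynamics standing in for the controlled PDE.
    A = -0.1 * np.eye(state_dim) + 0.01 * rng.standard_normal((state_dim, state_dim))
    X = np.empty((state_dim, n_steps))
    x = x0.copy()
    for k in range(n_steps):
        X[:, k] = x
        x = x + dt * (A @ x)  # explicit Euler step of the closed-loop dynamics
    return X

state_dim, n_samples, r = 100, 20, 8

# Treat time and the initial condition as random: stack snapshots from many runs.
snapshots = np.hstack(
    [simulate_closed_loop(rng.standard_normal(state_dim)) for _ in range(n_samples)]
)

# The rank-r left singular subspace minimizes the mean squared projection error
# over the sampled snapshots -- the empirical-risk analogue of classical POD.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V_r = U[:, :r]  # reduced basis, shape (state_dim, r)

empirical_risk = np.mean(
    np.linalg.norm(snapshots - V_r @ (V_r.T @ snapshots), axis=0) ** 2
)
print(f"empirical projection risk with r={r}: {empirical_risk:.3e}")
```

Under these assumptions, the reduced model obtained by projecting onto `V_r` would then serve as the low-dimensional system on which the HJB-based feedback (and its Tensor Train surrogate) is computed.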
