Linear Convergence of Data-Enabled Policy Optimization for Linear Quadratic Tracking
Abstract: Data-enabled policy optimization (DeePO) is a recently proposed method for attacking the open problem of direct adaptive LQR. In this work, we extend the DeePO framework to linear quadratic tracking (LQT) with offline data. By introducing a covariance parameterization of the LQT policy, we derive a direct data-driven formulation of the LQT problem. We then use gradient descent to iteratively update the parameterized policy toward an optimal LQT policy. Moreover, by revealing the connection between DeePO and model-based policy optimization, we prove linear convergence of the DeePO iteration. Finally, a numerical experiment is presented to validate the convergence results. We hope our work paves the way for direct adaptive LQT with online closed-loop data.
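To make the policy-optimization idea concrete, below is a minimal, illustrative sketch of gradient descent on a finite-horizon LQT-style cost for a state-feedback gain. It is not the paper's algorithm: the toy system, cost weights, reference, step size, and the use of a numerical gradient are all assumptions for illustration, and the paper's covariance parameterization and data-driven gradient are not reproduced here.

```python
# Illustrative sketch only (assumed toy setup, not the paper's DeePO/LQT formulation):
# gradient descent on a finite-horizon tracking cost for a static gain K, with
# u = K (x - x_ref) and a central-difference gradient estimate.
import numpy as np

# Assumed toy system and problem data (for illustration only).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
n, m = 2, 1
Q, R = np.eye(n), 0.1 * np.eye(m)     # assumed tracking and input weights
x_ref = np.array([1.0, 0.0])          # assumed constant reference to track

def tracking_cost(K, horizon=50):
    """Finite-horizon LQT-style cost of u = K (x - x_ref) on the toy system."""
    x = np.zeros(n)
    J = 0.0
    for _ in range(horizon):
        e = x - x_ref
        u = K @ e
        J += float(e @ Q @ e + u @ R @ u)
        x = A @ x + (B @ u).ravel()
    return J

def numerical_grad(J, K, eps=1e-6):
    """Central-difference gradient of J with respect to the gain K."""
    G = np.zeros_like(K)
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            E = np.zeros_like(K)
            E[i, j] = eps
            G[i, j] = (J(K + E) - J(K - E)) / (2.0 * eps)
    return G

# Plain gradient descent on the gain (step size is an assumed tuning choice).
K = np.zeros((m, n))
eta = 1e-3
for _ in range(200):
    K = K - eta * numerical_grad(tracking_cost, K)

print("final gain:", K)
print("final cost:", tracking_cost(K))
```

In the paper's setting, the gain is instead expressed through a covariance parameterization built from offline data and the gradient is computed directly from that data-driven formulation; the sketch above only conveys the generic descent loop that such a policy-optimization scheme iterates.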