Supplemental Material for "Primal-Dual Q-Learning Framework for LQR Design" (1811.08475v1)
Abstract: Recently, reinforcement learning (RL) has been receiving growing attention due to demonstrations in which it outperforms humans on certain challenging tasks. In our paper "Primal-Dual Q-Learning Framework for LQR Design," we study a new optimization formulation of the linear quadratic regulator (LQR) problem via Lagrangian duality theory in order to lay theoretical foundations for potentially effective RL algorithms. The new optimization problem includes the Q-function parameters, so it can be used directly to develop Q-learning algorithms, one of the most popular classes of RL algorithms. In the paper, we prove relations between saddle points of the Lagrangian function and optimal solutions of the Bellman equation. As an application, we propose a model-free primal-dual Q-learning algorithm to solve the LQR problem and demonstrate its validity through examples. It is worthwhile to consider additional potential applications of the proposed analysis. Various SDP formulations of Problem 5 or Problem 2 of the paper can be derived and used to develop new analysis and control design approaches. For example, an SDP-based optimal control design with energy and input constraints can be derived; another direction is algorithms for structured controller design. These approaches are included in this supplemental material.
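To make the SDP-based viewpoint concrete, the sketch below solves a discrete-time LQR problem by maximizing trace(P) subject to the Bellman-inequality LMI, which is the classical SDP characterization of the LQR value function; it is meant only as a minimal illustration of this family of formulations, not as the exact Problem 5 or Problem 2 of the paper. It assumes the cvxpy package with the SCS solver, and the system matrices A, B and cost weights Qc, Rc are hypothetical placeholders chosen for illustration.

```python
import numpy as np
import cvxpy as cp

# Hypothetical system data (illustrative only, not from the paper):
# a discretized double integrator with quadratic state/input costs.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)   # state cost weight
Rc = np.eye(1)   # input cost weight

n, m = B.shape
P = cp.Variable((n, n), symmetric=True)

# Bellman inequality for the LQR value function x'Px:
#   x'Px <= x'Qc x + u'Rc u + (Ax + Bu)' P (Ax + Bu)  for all (x, u),
# written as a single LMI in the stacked variable [x; u].
M = cp.bmat([
    [Qc + A.T @ P @ A - P, A.T @ P @ B],
    [B.T @ P @ A,          Rc + B.T @ P @ B],
])
# Symmetrize explicitly so cvxpy accepts the PSD constraint.
M_sym = (M + M.T) / 2

# Maximizing trace(P) over this feasible set recovers the stabilizing
# solution of the discrete-time algebraic Riccati equation.
prob = cp.Problem(cp.Maximize(cp.trace(P)), [M_sym >> 0, P >> 0])
prob.solve(solver=cp.SCS)

P_opt = P.value
# Optimal state-feedback gain u = -K x from the Riccati solution.
K = np.linalg.solve(Rc + B.T @ P_opt @ B, B.T @ P_opt @ A)
print("P =\n", P_opt)
print("K =", K)
```

This model-based SDP is the natural starting point for the extensions mentioned above: energy and input constraints enter as additional LMI constraints on the same variables, and structured controller design corresponds to imposing sparsity or subspace constraints on the resulting gain.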