
Optimization Landscape of Gradient Descent for Discrete-time Static Output Feedback

Published 27 Sep 2021 in math.OC, cs.SY, and eess.SY (arXiv:2109.13132v4)

Abstract: In this paper, we analyze the optimization landscape of gradient descent methods for static output feedback (SOF) control of discrete-time linear time-invariant systems with quadratic cost. The SOF setting arises commonly in practice, for example when the underlying process has unmodeled hidden states. We first establish several important properties of the SOF cost function, including coercivity, L-smoothness, and an M-Lipschitz continuous Hessian. Using these properties, we show that gradient descent converges to a stationary point at a dimension-free rate. Furthermore, we prove that, under mild conditions, gradient descent converges linearly to a local minimum when initialized sufficiently close to it. These results not only characterize the performance of gradient descent on the SOF problem, but also shed light on the efficiency of general policy gradient methods in reinforcement learning.
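To make the objective and the update concrete, here is a minimal numerical sketch of plain gradient descent on the SOF cost, assuming the standard parameterization u_t = -K y_t with y_t = C x_t and closed loop A_K = A - B K C. The cost J(K) = trace(P_K Σ_0) and its gradient are obtained from two discrete Lyapunov equations. This is an illustration of the setting, not the paper's code; the function names and the toy system below are hypothetical.

```python
# Sketch: gradient descent on the discrete-time SOF cost
# J(K) = trace(P_K @ Sigma0), with u_t = -K y_t and y_t = C x_t.
# Gradient used here (standard for this parameterization):
#   grad J(K) = 2 [ (R + B^T P_K B) K C - B^T P_K A ] Sigma_K C^T
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def sof_cost_and_grad(K, A, B, C, Q, R, Sigma0):
    """Cost J(K) and its gradient for the closed loop A_K = A - B K C."""
    A_K = A - B @ K @ C
    if np.max(np.abs(np.linalg.eigvals(A_K))) >= 1.0:
        return np.inf, None  # K is not stabilizing; the cost is infinite
    # Value matrix: P = A_K^T P A_K + Q + C^T K^T R K C
    P = solve_discrete_lyapunov(A_K.T, Q + C.T @ K.T @ R @ K @ C)
    # State covariance: Sigma_K = A_K Sigma_K A_K^T + Sigma0
    S = solve_discrete_lyapunov(A_K, Sigma0)
    cost = np.trace(P @ Sigma0)
    grad = 2.0 * ((R + B.T @ P @ B) @ K @ C - B.T @ P @ A) @ S @ C.T
    return cost, grad

def gradient_descent(K0, sys, eta=1e-3, iters=500, tol=1e-8):
    A, B, C, Q, R, Sigma0 = sys
    K = K0.copy()
    cost = np.inf
    for _ in range(iters):
        cost, grad = sof_cost_and_grad(K, A, B, C, Q, R, Sigma0)
        if grad is None or np.linalg.norm(grad) < tol:
            break
        K = K - eta * grad  # plain gradient step, as analyzed in the paper
    return K, cost

# Toy example (hypothetical system): C is wide, so one state is hidden.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])           # only the first state is observed
Q, R, Sigma0 = np.eye(2), np.eye(1), np.eye(2)
K0 = np.zeros((1, 1))                # stabilizing, since rho(A) < 1
K, cost = gradient_descent(K0, (A, B, C, Q, R, Sigma0))
```

In the paper's analysis the step size is tied to the smoothness constant L of the cost over a sublevel set; the fixed eta above is only a placeholder for illustration.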

Citations (14)
