
Learning-based synthesis of robust linear time-invariant controllers

Published 6 Dec 2021 in eess.SY and cs.SY | (2112.03345v2)

Abstract: Recent advances in learning for control make it possible to synthesize vehicle controllers from learned system dynamics while maintaining robust stability guarantees. However, no existing approach is well-suited for training linear time-invariant (LTI) controllers using arbitrary learned models of the dynamics. This article introduces a method to do so. It uses a robust control framework to derive robust stability criteria, and it uses simulated policy rollouts to obtain gradients with respect to the controller parameters, which serve to improve closed-loop performance. By formulating the stability criteria as penalties with computable gradients, they can be used to guide the controller parameters toward robust stability during gradient descent. The approach is flexible because it does not restrict the type of learned model used for the simulated rollouts. The robust control framework ensures that the controller is already robustly stabilizing when first implemented on the actual system, before any data has been collected. It also ensures that the system stays stable in the event of a shift in dynamics, provided the system behavior remains within the assumed uncertainty bounds. We demonstrate the approach by synthesizing a controller for simulated autonomous lane change maneuvers. This work thus presents a flexible approach to learning robustly stabilizing LTI controllers that takes advantage of modern machine learning techniques.
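
The following is a minimal, illustrative sketch (not the authors' implementation) of the training loop the abstract describes: LTI controller parameters are updated by gradient descent on a loss that combines a simulated-rollout performance cost with a differentiable stability penalty. The learned dynamics model is replaced here by a hypothetical linear surrogate (A_hat, B_hat), and a spectral-norm penalty is used as a conservative stand-in for the paper's robust-control stability criteria.

```python
# Hedged sketch: gradient-based tuning of a static LTI state-feedback gain K.
# A_hat, B_hat, rollout_cost, stability_penalty, and the margin value are
# illustrative assumptions, not quantities from the paper.
import torch

n_x, n_u, horizon = 4, 1, 50
A_hat = 0.95 * torch.eye(n_x) + 0.05 * torch.randn(n_x, n_x)  # placeholder learned dynamics
B_hat = 0.1 * torch.randn(n_x, n_u)
K = torch.zeros(n_u, n_x, requires_grad=True)                 # LTI controller parameters

def rollout_cost(K, x0):
    """Simulated policy rollout through the (learned) model with a quadratic cost."""
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = -K @ x
        x = A_hat @ x + B_hat @ u
        cost = cost + x.pow(2).sum() + 0.1 * u.pow(2).sum()
    return cost

def stability_penalty(K, margin=0.99):
    """Penalize closed-loop matrices whose spectral norm exceeds `margin`.
    This is a sufficient (conservative) discrete-time stability condition,
    used here in place of the paper's robust stability criteria."""
    A_cl = A_hat - B_hat @ K
    return torch.relu(torch.linalg.matrix_norm(A_cl, ord=2) - margin) ** 2

opt = torch.optim.Adam([K], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    x0 = torch.randn(n_x)
    # Performance term from the rollout plus the stability penalty term.
    loss = rollout_cost(K, x0) + 1e3 * stability_penalty(K)
    loss.backward()
    opt.step()
```

In the paper's formulation, the penalty would come from robust stability criteria derived via a robust control framework, so the controller remains stabilizing under the assumed model uncertainty; the sketch above only conveys how such criteria can be folded into gradient descent as penalty terms.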
