Structured learning of safety guarantees for the control of uncertain dynamical systems

Published 6 Dec 2021 in eess.SY and cs.SY | arXiv:2112.03347v2

Abstract: Approaches to keeping a dynamical system within state constraints typically rely on a model-based safety condition to limit the control signals. In the face of significant modeling uncertainty, the system can suffer significant performance penalties because the safety condition becomes overly conservative. Machine learning can be employed to reduce the uncertainty around the system dynamics and allow for higher performance. In this article, we propose the safe uncertainty learning principle and argue that the learning must be properly structured to preserve safety guarantees. For instance, robust safety conditions are necessary, and they must be initialized with conservative uncertainty bounds prior to learning. Also, the uncertainty bounds should only be tightened if the collected data sufficiently capture the future system behavior. To support the principle, two example problems are solved with control barrier functions: a lane-change controller for an autonomous vehicle, and an adaptive cruise controller. This work offers a way to evaluate whether machine learning preserves safety guarantees during the control of uncertain dynamical systems. It also highlights challenging aspects of learning for control.
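To make the abstract's robust-safety-condition idea concrete, here is a minimal sketch of a robust control-barrier-function (CBF) safety filter for the adaptive-cruise-control setting the paper mentions. This is not the paper's implementation: the time-headway barrier h(x) = D - T_h·v, the parameter values, and the disturbance bound `w_bar` are illustrative assumptions. Because the CBF condition is linear in the scalar acceleration input, the usual CBF quadratic program reduces to a closed-form clamp. Tightening `w_bar` after learning (only when data justify it) corresponds to the safe uncertainty learning principle: a larger `w_bar` is more conservative, a smaller one permits higher performance.

```python
def robust_cbf_filter(u_des, v, v_lead, D, T_h=1.8, alpha=1.0, w_bar=0.5):
    """Clamp the desired ego acceleration so a time-headway CBF stays safe.

    Illustrative model (not from the paper):
      state: ego speed v [m/s], lead speed v_lead [m/s], gap D [m]
      barrier: h(x) = D - T_h * v   (gap minus a headway buffer)
      dynamics of the barrier: dh/dt = (v_lead - v) - T_h * u + w,
      where |w| <= w_bar bounds the unmodeled disturbance.

    Robust CBF condition: dh/dt >= -alpha * h(x) for all |w| <= w_bar,
    i.e. worst case  (v_lead - v) - T_h * u - w_bar >= -alpha * h(x).
    Since the condition is linear in u, it is an upper bound on u and
    the CBF quadratic program collapses to a simple min().
    """
    h = D - T_h * v
    u_max = ((v_lead - v) + alpha * h - w_bar) / T_h
    return min(u_des, u_max)
```

For example, with a large gap the desired acceleration passes through unchanged, while once the gap falls below the headway buffer the filter forces braking regardless of `u_des`; increasing `w_bar` (a looser uncertainty bound, as before learning) shrinks the admissible acceleration, which is exactly the conservatism the paper argues learning should safely reduce.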

Citations (6)
