
Probabilistic model predictive safety certification for learning-based control (1906.10417v2)

Published 25 Jun 2019 in eess.SY and cs.SY

Abstract: Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept, called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states that is known to be safe. A novel formulation in terms of a convex receding horizon problem allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on Bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing a simple RL algorithm with safety certificates.
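The core idea of a safety-certification layer like PMPSC is that the learning algorithm proposes an input, and a certification layer modifies it as little as possible so that the closed-loop state remains provably safe. The sketch below illustrates this filtering pattern on a toy scalar linear system with a hard state bound; the actual paper uses stochastic tubes, chance constraints, and a convex receding-horizon problem, none of which are reproduced here. All names (`certify`, the system parameters `a`, `b`, `x_max`) are illustrative assumptions, not the paper's notation.

```python
# Minimal sketch of a safety-filter loop in the spirit of PMPSC.
# Toy system: x_next = a*x + b*u, with the safety requirement
# |x_next| <= x_max. The filter returns the admissible input
# closest to the RL proposal (a one-dimensional projection, which
# in this scalar case reduces to clipping).

def certify(u_rl, x, a=1.0, b=1.0, x_max=1.0):
    """Return the input closest to u_rl that keeps |a*x + b*u| <= x_max."""
    lo = (-x_max - a * x) / b   # smallest admissible input (assumes b > 0)
    hi = (x_max - a * x) / b    # largest admissible input
    return min(max(u_rl, lo), hi)

x = 0.8
u_rl = 0.9                      # aggressive action proposed by the learner
u_safe = certify(u_rl, x)       # certified, minimally modified action
x_next = 1.0 * x + 1.0 * u_safe # stays within the safe bound
```

In the paper's setting, the clipping step is replaced by solving a convex receding-horizon optimization whose recursive feasibility guarantees that a safe backup input always exists, even under unbounded stochastic disturbances.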

Citations (93)
