Safe Reinforcement Learning Using Robust Control Barrier Functions (2110.05415v2)

Published 11 Oct 2021 in eess.SY, cs.AI, cs.LG, cs.RO, and cs.SY

Abstract: Reinforcement Learning (RL) has been shown to be effective in many scenarios. However, it typically requires the exploration of a sufficiently large number of state-action pairs, some of which may be unsafe. Consequently, its application to safety-critical systems remains a challenge. An increasingly common approach to address safety involves the addition of a safety layer that projects the RL actions onto a safe set of actions. In turn, a difficulty for such frameworks is how to effectively couple RL with the safety layer to improve the learning performance. In this paper, we frame safety as a differentiable robust-control-barrier-function layer in a model-based RL framework. Moreover, we also propose an approach to modularly learn the underlying reward-driven task, independent of safety constraints. We demonstrate that this approach both ensures safety and effectively guides exploration during training in a range of experiments, including zero-shot transfer when the reward is learned in a modular way.
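The safety layer described in the abstract projects the RL action onto a set of actions that keep a control barrier function (CBF) non-negative. As an illustrative sketch only (this is not the paper's robust-CBF formulation), the projection for a single-integrator system with one barrier constraint has a closed form: minimize ||u - u_rl||^2 subject to dh/dx . u >= -alpha * h(x). The dynamics, barrier h(x) = d^2 - ||x||^2 (stay inside a disk of radius d), and the function name `cbf_filter` are all assumptions made for this example.

```python
def cbf_filter(x, u_rl, d=1.0, alpha=1.0):
    """Project the RL action u_rl onto the safe half-space
    {u : dh/dx . u >= -alpha * h(x)} for the barrier h(x) = d^2 - ||x||^2.

    Illustrative single-constraint case; the general safety layer solves a
    quadratic program over all barrier constraints.
    """
    h = d ** 2 - sum(xi ** 2 for xi in x)          # barrier value h(x)
    a = [-2.0 * xi for xi in x]                    # gradient dh/dx
    b = -alpha * h                                 # constraint: a . u >= b
    a_dot_u = sum(ai * ui for ai, ui in zip(a, u_rl))
    if a_dot_u >= b:
        return list(u_rl)                          # RL action already safe
    a_dot_a = sum(ai * ai for ai in a)
    if a_dot_a == 0.0:
        return list(u_rl)                          # at the origin: any u safe
    lam = (b - a_dot_u) / a_dot_a                  # minimal-norm correction
    return [ui + lam * ai for ui, ai in zip(u_rl, a)]
```

The filtered action satisfies the barrier constraint with equality when the RL action is unsafe, and is returned unchanged otherwise, which is what makes such a layer minimally invasive with respect to the learned policy.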

Authors (5)
  1. Yousef Emam (7 papers)
  2. Gennaro Notomista (28 papers)
  3. Paul Glotfelter (7 papers)
  4. Zsolt Kira (110 papers)
  5. Magnus Egerstedt (78 papers)
Citations (30)
