
Conservative Safety Critics for Exploration (2010.14497v2)

Published 27 Oct 2020 in cs.LG, cs.AI, cs.RO, and stat.ML

Abstract: Safe exploration presents a major challenge in reinforcement learning (RL): when active data collection requires deploying partially trained policies, we must ensure that these policies avoid catastrophically unsafe regions, while still enabling trial and error learning. In this paper, we target the problem of safe exploration in RL by learning a conservative safety estimate of environment states through a critic, and provably upper bound the likelihood of catastrophic failures at every training iteration. We theoretically characterize the tradeoff between safety and policy improvement, show that the safety constraints are likely to be satisfied with high probability during training, derive provable convergence guarantees for our approach, which is no worse asymptotically than standard RL, and demonstrate the efficacy of the proposed approach on a suite of challenging navigation, manipulation, and locomotion tasks. Empirically, we show that the proposed approach can achieve competitive task performance while incurring significantly lower catastrophic failure rates during training than prior methods. Videos are available at https://sites.google.com/view/conservative-safety-critics/home
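The core idea in the abstract, screening candidate actions through a learned safety critic so that the estimated failure probability stays below a threshold, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names `safe_action`, `safety_critic`, and the toy critic are hypothetical, and the real method trains the critic conservatively alongside the policy.

```python
import random

def safe_action(state, candidate_actions, safety_critic, epsilon=0.1):
    """Pick an exploratory action whose estimated failure probability,
    as judged by a (learned) safety critic, stays below epsilon.
    Falls back to the least-unsafe candidate if none qualifies."""
    safe = [a for a in candidate_actions
            if safety_critic(state, a) <= epsilon]
    if safe:
        # Explore freely among actions the critic deems safe enough.
        return random.choice(safe)
    # No candidate meets the threshold: take the safest available one.
    return min(candidate_actions, key=lambda a: safety_critic(state, a))

# Toy critic for illustration: larger action magnitude = higher
# estimated failure probability (stands in for a trained network).
def toy_critic(state, action):
    return abs(action) / 10.0
```

Under this toy critic, only actions with magnitude at most 1 pass the default threshold of 0.1, so exploration is confined to that region until the (in practice, continually retrained) critic loosens its estimates.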

Authors (6)
  1. Homanga Bharadhwaj (36 papers)
  2. Aviral Kumar (74 papers)
  3. Nicholas Rhinehart (24 papers)
  4. Sergey Levine (531 papers)
  5. Florian Shkurti (52 papers)
  6. Animesh Garg (129 papers)
Citations (126)
