
RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes (2405.04714v1)

Published 7 May 2024 in cs.RO, cs.AI, and cs.LG

Abstract: Reinforcement learning provides an appealing framework for robotic control due to its ability to learn expressive policies purely through real-world interaction. However, this requires addressing real-world constraints and avoiding catastrophic failures during training, which might severely impede both learning progress and the performance of the final policy. In many robotics settings, this amounts to avoiding certain "unsafe" states. The high-speed off-road driving task represents a particularly challenging instantiation of this problem: a high-return policy should drive as aggressively and as quickly as possible, which often requires getting close to the edge of the set of "safe" states, and therefore places a particular burden on the method to avoid frequent failures. To both learn highly performant policies and avoid excessive failures, we propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum. Furthermore, we show that our risk-sensitive objective automatically avoids out-of-distribution states when equipped with an estimator for epistemic uncertainty. We implement our algorithm on a small-scale rally car and show that it is capable of learning high-speed policies for a real-world off-road driving task. We show that our method greatly reduces the number of safety violations during the training process, and actually leads to higher-performance policies in both driving and non-driving simulation environments with similar challenges.

Reinforcement Learning for Safer High-Speed Driving: An Overview of RACER

Introduction

High-speed off-road driving with reinforcement learning (RL) poses unique challenges: driving quickly over uneven terrain means operating near the edge of the set of safe states, where crashes are easy to trigger. This is where RACER comes in: a framework that combines risk-sensitive control with an adaptive action space curriculum to learn high-speed driving policies both efficiently and safely.

Core Components of RACER

Let’s break down the core components of RACER and how they contribute to its effectiveness:

Risk-Sensitive Actor-Critic Objective

Traditional RL methods optimize expected returns, which can be risky when training directly in real-world environments. RACER, however, leverages Conditional Value at Risk (CVaR) to prioritize safety:

  • CVaR: Rather than optimizing only the expected return, the agent optimizes the Conditional Value at Risk, i.e., the mean of the worst-case fraction of outcomes. This keeps the policy conservative under uncertainty and reduces the likelihood of catastrophic failures during training (a minimal estimator is sketched below).
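
To make the objective concrete, here is a minimal sketch of a Monte Carlo CVaR estimator over sampled returns. The function names, the risk level α = 0.1, and the toy return distributions are illustrative assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def cvar(returns: np.ndarray, alpha: float = 0.1) -> float:
    """Estimate CVaR_alpha: the mean of the worst alpha-fraction of returns.

    A risk-sensitive actor can maximize this quantity instead of the plain
    mean, which penalizes policies whose lower tail contains crashes.
    """
    sorted_returns = np.sort(returns)  # ascending, so worst outcomes come first
    cutoff = max(1, int(np.ceil(alpha * len(returns))))
    return float(sorted_returns[:cutoff].mean())

# Example: two hypothetical action choices with roughly the same expected
# return (~5.0), but the second has a rare, severe crash outcome.
rng = np.random.default_rng(0)
safe = rng.normal(5.0, 1.0, size=1000)
risky = np.where(rng.random(1000) < 0.05, -50.0, rng.normal(7.9, 1.0, 1000))
print(np.mean(safe), cvar(safe))    # similar mean, mild lower tail
print(np.mean(risky), cvar(risky))  # similar mean, far worse CVaR
```

A risk-neutral agent sees the two options as nearly equivalent; the CVaR objective strongly prefers the first, which is exactly the behavior that reduces crashes during training.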

Distributional Critics

RACER’s critic models the full return distribution, addressing both aleatoric (stochasticity in the environment) and epistemic (uncertainty due to lack of data) uncertainties:

  • Ensembled Critics: Multiple independently trained networks each predict the return distribution. Disagreement between these critics signals epistemic uncertainty, i.e., state-action pairs for which little data has been seen.
  • Explicit Entropy Maximization: The critics are trained so that their predicted return distributions have high entropy for out-of-distribution actions, which makes the risk-sensitive agent cautious about scenarios it has not encountered (an illustrative ensembled critic is sketched after this list).
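
The sketch below shows one generic way an ensemble of quantile critics can separate the two kinds of uncertainty. The layer sizes, number of quantiles, ensemble size, and the specific disagreement measures are assumptions chosen for clarity, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class QuantileCritic(nn.Module):
    """One critic head that predicts n_quantiles of the return Z(s, a)."""
    def __init__(self, obs_dim: int, act_dim: int, n_quantiles: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_quantiles),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))  # (batch, n_quantiles)

class EnsembleCritic(nn.Module):
    """Ensemble of independent quantile critics.

    Quantile spread within one member reflects aleatoric uncertainty;
    disagreement between members reflects epistemic uncertainty.
    """
    def __init__(self, obs_dim, act_dim, n_members: int = 5, n_quantiles: int = 32):
        super().__init__()
        self.members = nn.ModuleList(
            [QuantileCritic(obs_dim, act_dim, n_quantiles) for _ in range(n_members)]
        )

    def forward(self, obs, act):
        # (n_members, batch, n_quantiles)
        return torch.stack([m(obs, act) for m in self.members])

    def uncertainties(self, obs, act):
        q = self.forward(obs, act)
        aleatoric = q.std(dim=-1).mean(dim=0)  # within-member quantile spread
        epistemic = q.mean(dim=-1).std(dim=0)  # between-member disagreement
        return aleatoric, epistemic

# Usage: high epistemic uncertainty flags out-of-distribution state-action pairs,
# which a risk-sensitive objective will then avoid.
critic = EnsembleCritic(obs_dim=8, act_dim=2)
obs, act = torch.randn(4, 8), torch.randn(4, 2)
aleatoric, epistemic = critic.uncertainties(obs, act)
```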

Adaptive Action Limits

RACER uses adaptive action limits that start with cautious actions and gradually expand as the agent becomes more confident:

  • Soft-Clip Mechanism: Actions are initially restricted to a conservative subset of the action space. As the critics become more confident that wider actions are safe, the limits expand.
  • This adaptive curriculum enforces cautious exploration with fewer risky actions early in training, while progressively unlocking higher performance (one possible implementation is sketched below).
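
One way to realize such a curriculum is to squash the policy's raw action through a smooth clip whose bound widens only while the critics agree. The tanh-based squash, the confidence threshold, and the growth schedule below are illustrative assumptions rather than the paper's exact update rule.

```python
import numpy as np

def soft_clip(raw_action: np.ndarray, limit: float) -> np.ndarray:
    """Smoothly restrict actions to (-limit, limit) via a scaled tanh."""
    return limit * np.tanh(raw_action / limit)

def update_limit(limit: float, epistemic_uncertainty: float,
                 threshold: float = 0.1, grow: float = 1.05,
                 max_limit: float = 1.0) -> float:
    """Widen the action bound only while the critics agree (low epistemic
    uncertainty) about the returns of the currently reachable actions."""
    if epistemic_uncertainty < threshold:
        limit = min(limit * grow, max_limit)
    return limit

# Early training: a tight bound keeps throttle commands conservative,
# even if the raw policy output is aggressive.
limit = 0.2
action = soft_clip(np.array([3.0]), limit)   # close to 0.2 despite a large raw command
# As critic confidence grows, the bound expands toward the full action space.
limit = update_limit(limit, epistemic_uncertainty=0.05)
```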

Strong Numerical Results

RACER showcases impressive numerical results:

  • In real-world tests on a tenth-scale autonomous rally car, RACER reached speeds over 10% higher than baseline approaches while cutting the number of training-time failures by more than half.
  • Simulation studies showed the same trend: RACER outperformed standard methods such as Soft Actor-Critic (SAC) and other risk-sensitive variants in both final policy performance and number of training failures.

Practical and Theoretical Implications

Practical Implications

  • Real-World Safety: By reducing the number of failures during training, RACER makes RL more viable for real-world applications, especially in safety-critical domains like autonomous driving.
  • Performance and Efficiency: RACER's ability to learn high-speed, high-performance policies with fewer setbacks means more efficient training and less wear and tear on physical robots.

Theoretical Implications

  • Handling Epistemic Uncertainty: RACER demonstrates a novel approach to incorporating epistemic uncertainty into RL, providing a framework that can be extended to other domains where safety during training is critical.
  • Adaptive Risk Sensitivity: The combination of CVaR with adaptive action limits shows that risk-sensitive objectives can be pragmatically integrated into robotic control, leading to safer and more robust policies.

Future Developments

The promising results of RACER open avenues for further research and improvements:

  • Extending RACER to Other Domains: Applying RACER to other high-risk tasks, like aerial drones or underwater robots, could yield insights into generalizing this approach.
  • Improving Adaptive Mechanisms: Refining how action limits are adjusted could lead to even safer and more efficient training pipelines.
  • Hybrid Models: Combining model-free and model-based approaches using RACER’s framework might balance exploration and safety even better.

Ultimately, RACER represents a significant step forward in safe reinforcement learning, providing a blueprint for future research in making RL robust and practical for real-world, high-risk applications.

Authors (2)
  1. Kyle Stachowicz (12 papers)
  2. Sergey Levine (531 papers)