
TorchDriveEnv: A Reinforcement Learning Benchmark for Autonomous Driving with Reactive, Realistic, and Diverse Non-Playable Characters (2405.04491v1)

Published 7 May 2024 in cs.AI, cs.LG, cs.MA, and cs.RO

Abstract: The training, testing, and deployment of autonomous vehicles require realistic and efficient simulators. Moreover, because problems vary widely across autonomous systems, these simulators need to be easy to use and easy to modify. To address these needs we introduce TorchDriveSim and its benchmark extension TorchDriveEnv. TorchDriveEnv is a lightweight reinforcement learning benchmark programmed entirely in Python, which can be modified to test a number of different factors in learned vehicle behavior, including the effect of varying kinematic models, agent types, and traffic control patterns. Most importantly, unlike many replay-based simulation approaches, TorchDriveEnv is fully integrated with a state-of-the-art behavioral simulation API. This allows users to train and evaluate driving models alongside data-driven non-playable characters (NPCs) whose initializations and driving behavior are reactive, realistic, and diverse. We illustrate the efficiency and simplicity of TorchDriveEnv by evaluating common reinforcement learning baselines in both training and validation environments. Our experiments show that TorchDriveEnv is easy to use, but difficult to solve.
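To make the benchmark's intended workflow concrete, the following is a minimal sketch of training and evaluating one of the "common reinforcement learning baselines" the abstract mentions. It assumes TorchDriveEnv exposes a Gymnasium-compatible environment and uses Stable-Baselines3 for the baseline; the environment id "torchdriveenv/Driving-v0" is hypothetical, not the package's documented name, so consult the actual TorchDriveEnv documentation for real entry points.

# Illustrative sketch only. Assumptions (not from the paper): TorchDriveEnv
# registers a Gymnasium-compatible environment, and the id below is
# hypothetical. The control loop and Stable-Baselines3 calls follow the
# standard public APIs of those libraries.
import gymnasium as gym
from stable_baselines3 import PPO

# Hypothetical environment id; replace with the real TorchDriveEnv name.
env = gym.make("torchdriveenv/Driving-v0")

# Train a common RL baseline (PPO) against the reactive NPC traffic.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)

# Roll out the trained policy for one validation episode.
obs, info = env.reset(seed=0)
terminated = truncated = False
while not (terminated or truncated):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
env.close()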
