
Deep Reinforcement Learning for Autonomous Vehicle Intersection Navigation (2310.08595v2)

Published 30 Sep 2023 in cs.RO, cs.AI, and cs.LG

Abstract: In this paper, we explore the challenges associated with navigating complex T-intersections in dense traffic scenarios for autonomous vehicles (AVs). Reinforcement learning algorithms have emerged as a promising approach to address these challenges by enabling AVs to make safe and efficient decisions in real-time. Here, we address the problem of efficiently and safely navigating T-intersections using a lower-cost, single-agent approach based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) reinforcement learning algorithm. We show that our TD3-based method, when trained and tested in the CARLA simulation platform, demonstrates stable convergence and improved safety performance in various traffic densities. Our results reveal that the proposed approach enables the AV to effectively navigate T-intersections, outperforming previous methods in terms of travel delays, collision minimization, and overall cost. This study contributes to the growing body of knowledge on reinforcement learning applications in autonomous driving and highlights the potential of single-agent, cost-effective methods for addressing more complex driving scenarios and advancing reinforcement learning algorithms in the future.
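The abstract's method rests on TD3 (Fujimoto et al., 2018), whose two key mechanisms are clipped double-Q learning and target policy smoothing (plus delayed actor updates). The sketch below illustrates only the TD3 target computation on a toy 1-D state with linear critics and a tanh actor; the network shapes, noise scale, and hyperparameters are illustrative assumptions, not the paper's actual CARLA setup.

```python
import numpy as np

# Toy linear function approximators standing in for the critic and
# actor networks (illustrative assumption, not the paper's architecture).
rng = np.random.default_rng(0)

def critic(w, s, a):
    # Linear Q(s, a) = w0*s + w1*a + w2
    return w[0] * s + w[1] * a + w[2]

def actor(theta, s):
    # Deterministic policy, squashed to the action range [-1, 1]
    return np.tanh(theta[0] * s + theta[1])

w1 = rng.normal(size=3)       # first critic
w2 = rng.normal(size=3)       # second ("twin") critic
theta_t = rng.normal(size=2)  # target actor parameters

gamma, sigma, clip = 0.99, 0.2, 0.5  # discount, smoothing noise, noise clip

r, s_next = 1.0, 0.3  # one (reward, next state) sample from the replay buffer

# 1) Target policy smoothing: perturb the target action with clipped noise.
noise = np.clip(sigma * rng.normal(), -clip, clip)
a_next = np.clip(actor(theta_t, s_next) + noise, -1.0, 1.0)

# 2) Clipped double-Q: bootstrap from the minimum of the twin critics,
#    which suppresses the overestimation bias of a single critic.
q_min = min(critic(w1, s_next, a_next), critic(w2, s_next, a_next))

# TD3 target for both critics' regression losses.
y = r + gamma * q_min
# (In full TD3 the actor and target networks are updated only every
#  few critic steps -- the "delayed" part of the algorithm.)
```

In the full algorithm, both critics are regressed toward `y`, while the actor maximizes `critic(w1, s, actor(theta, s))` on a slower schedule.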
