
Deep Reinforcement Learning in Autonomous Car Path Planning and Control: A Survey (2404.00340v1)

Published 30 Mar 2024 in cs.RO, cs.SY, and eess.SY

Abstract: Combining data-driven applications with control systems plays a key role in recent Autonomous Car research. This thesis offers a structured review of the latest literature on Deep Reinforcement Learning (DRL) within the realm of autonomous vehicle Path Planning and Control. It collects a series of DRL methodologies and algorithms and their applications in the field, focusing notably on their roles in trajectory planning and dynamic control. In this review, we delve into the application outcomes of DRL technologies in this domain. By summarizing this literature, we highlight potential challenges, aiming to offer insights that might aid researchers engaged in related fields.
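The agent-environment interaction loop that the surveyed DRL planners and controllers build on can be illustrated with a toy example. The sketch below uses plain tabular Q-learning (not a deep network, and not any specific method from the surveyed papers) on a hypothetical one-dimensional lane-keeping task; the environment, discretisation, and hyperparameters are all illustrative assumptions.

```python
import random

# Toy lane-keeping task: the state is the car's lateral offset from the
# lane centre, discretised to integer cells -3..3; the actions steer
# left (-1), hold (0), or right (+1); the reward penalises distance from
# centre. Tabular Q-learning stands in for the deep function approximators
# used in practice, but the interaction loop is the same.

OFFSETS = range(-3, 4)   # discretised lateral positions
ACTIONS = (-1, 0, 1)     # steer left / hold / steer right

def step(offset, action):
    """Apply steering, clamp to the road edge, reward closeness to centre."""
    nxt = max(-3, min(3, offset + action))
    return nxt, -abs(nxt)

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in OFFSETS for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(list(OFFSETS))
        for _ in range(20):                 # short episode horizon
            if rng.random() < eps:          # epsilon-greedy exploration
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r = step(s, a)
            best_next = max(q[(s2, a2)] for a2 in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in OFFSETS}
    print(policy)   # the learned policy steers back toward offset 0
```

After training, the greedy policy steers toward the lane centre from either side and holds once there; the DRL methods reviewed below replace the Q-table with neural networks and the toy dynamics with vehicle models or simulators.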

References (80)
  1. A survey of autonomous driving: Common practices and emerging technologies. IEEE Access, 8:58443–58469, 2020.
  2. Model predictive path tracking control for automated road vehicles: A review. Annual reviews in control, 55:194–236, 2023.
  3. Lateral control for autonomous vehicles: A comparative evaluation. Annual Reviews in Control, 57:100910, 2024.
  4. Advanced planning for autonomous vehicles using reinforcement learning and deep inverse reinforcement learning. Robotics and Autonomous Systems, 114:1–18, 2019.
  5. Deep reinforcement learning for intelligent transportation systems: A survey. IEEE Transactions on Intelligent Transportation Systems, 23(1):11–32, 2020.
  6. S. Aradi. Survey of deep reinforcement learning for motion planning of autonomous vehicles. IEEE Transactions on Intelligent Transportation Systems, 23(2):740–759, 2020.
  7. A review of motion planning for highway autonomous driving. IEEE Transactions on Intelligent Transportation Systems, 21(5):1826–1848, 2019.
  8. A survey of deep learning applications to autonomous vehicle control. IEEE Transactions on Intelligent Transportation Systems, 22(2):712–733, 2020.
  9. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
  10. Trust region policy optimization. In International conference on machine learning, pages 1889–1897. PMLR, 2015.
  11. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
  12. Twin-delayed DDPG: A deep reinforcement learning technique to model a continuous movement of an intelligent robot agent. In Proceedings of the 3rd international conference on vision, image and signal processing, pages 1–5, 2019.
  13. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
  14. Deep q-learning from demonstrations. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.
  15. Pretraining deep actor-critic reinforcement learning algorithms with expert demonstrations. arXiv preprint arXiv:1801.10459, 2018.
  16. Model-based reinforcement learning for atari. arXiv preprint arXiv:1903.00374, 2019.
  17. Model-based value estimation for efficient model-free reinforcement learning. arXiv preprint arXiv:1803.00101, 2018.
  18. Continuous deep q-learning with model-based acceleration. In International conference on machine learning, pages 2829–2838. PMLR, 2016.
  19. A review of motion planning techniques for automated vehicles. IEEE Transactions on intelligent transportation systems, 17(4):1135–1145, 2015.
  20. Edouard Leurent. A survey of state-action representations for autonomous driving. 2018.
  21. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pages 278–287, 1999.
  22. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, page 1, 2004.
  23. Object detection with deep neural networks for reinforcement learning in the task of autonomous vehicles path planning at the intersection. Optical Memory and Neural Networks, 28(4):283–295, 2019.
  24. Proximal policy optimization through a deep reinforcement learning framework for multiple autonomous vehicles at a non-signalized intersection. Applied Sciences-Basel, 10(16), 2020.
  25. Multi-task safe reinforcement learning for navigating intersections in dense traffic. Journal of the Franklin Institute-Engineering and Applied Mathematics, 360(17):13737–13760, 2023.
  26. Online longitudinal trajectory planning for connected and autonomous vehicles in mixed traffic flow with deep reinforcement learning approach. Journal of Intelligent Transportation Systems, 27(3):396–410, 2023.
  27. Integrating deep reinforcement learning with optimal trajectory planner for automated driving. In 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), 2020.
  28. J. Clemmons and Y.-F. Jin. Reinforcement learning-based guidance of autonomous vehicles. In 2023 24th International Symposium on Quality Electronic Design (ISQED), pages 1–6, 2023.
  29. End-to-end autonomous vehicle navigation control method guided by the dynamic window approach. In 2023 IEEE 6th International Electrical and Energy Conference (CIEEC), pages 4472–4476, 2023.
  30. Real-time metadata-driven routing optimization for electric vehicle energy consumption minimization using deep reinforcement learning and markov chain model. Electric Power Systems Research, 192, 2021.
  31. Hierarchical evasive path planning using reinforcement learning and model predictive control. IEEE Access, 8:187470–187482, 2020.
  32. Reinforcement learning based negotiation-aware motion planning of autonomous vehicles. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4532–4537, 2021.
  33. An efficient planning method based on deep reinforcement learning with hybrid actions for autonomous driving on highway. International Journal of Machine Learning and Cybernetics, 14(10):3483–3499, 2023.
  34. Explainable navigation system using fuzzy reinforcement learning. International Journal of Interactive Design and Manufacturing - IJIDeM, 14(4):1411–1428, 2020.
  35. Residual policy learning facilitates efficient model-free autonomous racing. IEEE Robotics and Automation Letters, 7(4):11625–11632, 2022.
  36. Autonomous driving at the handling limit using residual reinforcement learning. Advanced Engineering Informatics, 54, 2022.
  37. Hybrid DDPG approach for vehicle motion planning. In ICINCO: Proceedings of the 16th International Conference on Informatics in Control, Automation and Robotics, Vol 1, pages 422–429, 2019.
  38. Proving ground test of a DDPG-based vehicle trajectory planner. In 2020 European Control Conference (ECC 2020), pages 332–337, 2020.
  39. Stability analysis for autonomous vehicle navigation trained over deep deterministic policy gradient. Mathematics, 11(1), 2023.
  40. Path planning for autonomous vehicles in unknown dynamic environment based on deep reinforcement learning. Applied Sciences-Basel, 13(18), 2023.
  41. Covernav: Cover following navigation planning in unstructured outdoor environment with deep reinforcement learning. In 2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS), pages 127–132, 2023.
  42. Integrated chassis control: Classification, analysis and future trends. Annual Reviews in Control, 51:172–205, 2021.
  43. Cooperative adaptive cruise control: A reinforcement learning approach. IEEE Transactions on intelligent transportation systems, 12(4):1248–1260, 2011.
  44. Self-optimizing path tracking controller for intelligent vehicles based on reinforcement learning. Symmetry-Basel, 14(1), 2022.
  45. Explainable reinforcement learning for longitudinal control. In ICAART: Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Vol 2, pages 874–881, 2021.
  46. Deep reinforcement learning based control for autonomous vehicles in carla. Multimedia Tools and Applications, 81(3):3553–3576, 2022.
  47. Reinforcement-learning-based cooperative adaptive cruise control of buses in the lincoln tunnel corridor with time-varying topology. IEEE Transactions on Intelligent Transportation Systems, 20(10):3796–3805, 2019.
  48. A safe reinforcement learning based trajectory tracker framework. IEEE Transactions on Intelligent Transportation Systems, 24(6):5765–5780, 2023.
  49. Deep reinforcement learning based tracking control of unmanned vehicle with safety guarantee. In 2022 13th Asian Control Conference (ASCC), pages 1893–1898, 2022.
  50. Path-tracking control strategy of unmanned vehicle based on DDPG algorithm. Sensors, 22(20), 2022.
  51. Lane following method based on improved DDPG algorithm. Sensors, 21(14), 2021.
  52. Safe reinforcement learning for model-reference trajectory tracking of uncertain autonomous vehicles with model-based acceleration. IEEE Transactions on Intelligent Vehicles, 8(3):2332–2344, 2023.
  53. Design of a reinforcement learning-based lane keeping planning agent for automated vehicles. Applied Sciences-Basel, 10(20), 2020.
  54. Safe, efficient, and comfortable autonomous driving based on cooperative vehicle infrastructure system. International Journal of Environmental Research and Public Health, 20(1), 2023.
  55. Model-based reinforcement learning for time-optimal velocity control. IEEE Robotics and Automation Letters, 5(4):6185–6192, 2020.
  56. Combined longitudinal and lateral control of autonomous vehicles based on reinforcement learning. In 2021 American Control Conference (ACC), pages 1929–1934, 2021.
  57. Joint optimization of sensing, decision-making and motion-controlling for autonomous vehicles: A deep reinforcement learning approach. IEEE Transactions on Vehicular Technology, 71(5):4642–4654, 2022.
  58. Reinforcement-learning-aided adaptive control for autonomous driving with combined lateral and longitudinal dynamics. In 2023 IEEE 12th Data Driven Control and Learning Systems Conference (DDCLS), pages 840–845, 2023.
  59. Safe, efficient, and comfortable velocity control based on reinforcement learning for autonomous driving. Transportation Research Part C: Emerging Technologies, 117:102662, 2020.
  60. Human-like autonomous vehicle speed control by deep reinforcement learning with double q-learning. In 2018 IEEE intelligent vehicles symposium (IV), pages 1251–1256. IEEE, 2018.
  61. Reaching the limit in autonomous racing: Optimal control versus reinforcement learning. Science Robotics, 8(82):eadg1462, 2023.
  62. Waymo LLC. Waymo open dataset: An autonomous driving dataset, 2019.
  63. 1 year, 1000 km: The oxford robotcar dataset. The International Journal of Robotics Research, 36(1):3–15, 2017.
  64. The apolloscape open dataset for autonomous driving and its application. IEEE transactions on pattern analysis and machine intelligence, 42(10):2702–2719, 2019.
  65. Ry Rivard. Udacity project on "pause". Inside Higher Ed, 18, 2013.
  66. Self-driving car steering angle prediction based on image recognition. arXiv preprint arXiv:1912.05440, 2019.
  67. An end-to-end deep reinforcement learning model based on proximal policy optimization algorithm for autonomous driving of off-road vehicle. In International Conference on Autonomous Unmanned Systems, pages 2692–2704. Springer, 2022.
  68. Learning robust control policies for end-to-end autonomous driving from data-driven simulation. IEEE Robotics and Automation Letters, 5(2):1143–1150, 2020.
  69. Segmented encoding for sim2real of rl-based end-to-end autonomous driving. In 2022 IEEE Intelligent Vehicles Symposium (IV), pages 1290–1296, 2022.
  70. Autonomous merging onto the highway using lstm neural network. In 2023 11th RSI International Conference on Robotics and Mechatronics (ICRoM), pages 574–579, 2023.
  71. Lane change decision control of autonomous vehicle based on A3C algorithm. In Cognitive Systems and Information Processing: 8th International Conference, ICCSIP 2023, Revised Selected Papers. Communications in Computer and Information Science, volume 1918, pages 217–229, 2024.
  72. Integrated decision and control: toward interpretable and computationally efficient driving intelligence. IEEE transactions on cybernetics, 53(2):859–873, 2022.
  73. Pink noise is all you need: Colored noise exploration in deep reinforcement learning. In The Eleventh International Conference on Learning Representations, 2022.
  74. Bellman eluder dimension: New rich classes of rl problems, and sample-efficient algorithms. Advances in neural information processing systems, 34:13406–13418, 2021.
  75. Small dataset, big gains: Enhancing reinforcement learning by offline pre-training with model-based augmentation. In Computer Sciences & Mathematics Forum, volume 9, page 4. MDPI, 2024.
  76. Learning dexterous in-hand manipulation. The International Journal of Robotics Research, 39(1):3–20, 2020.
  77. Virtual to real reinforcement learning for autonomous driving. arXiv preprint arXiv:1704.03952, 2017.
  78. Dense reinforcement learning for safety validation of autonomous vehicles. Nature, 615(7953):620–627, 2023.
  79. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437–1480, 2015.
  80. Safe-state enhancement method for autonomous driving via direct hierarchical reinforcement learning. IEEE Transactions on Intelligent Transportation Systems, 24(9):9966–9983, 2023.