
Deep Reinforcement Learning-based Multi-objective Path Planning on the Off-road Terrain Environment for Ground Vehicles

Published 23 May 2023 in cs.RO and cs.AI | (2305.13783v2)

Abstract: Because energy consumption differs vastly between up-slope and down-slope driving, the shortest path on a complex off-road terrain environment (2.5D map) is not always the path with the least energy consumption. For any energy-sensitive vehicle, realizing a good trade-off between distance and energy consumption in 2.5D path planning is highly meaningful. In this paper, we propose a deep reinforcement learning-based 2.5D multi-objective path planning method (DMOP). The DMOP efficiently finds the desired path in three steps: (1) Transform the high-resolution 2.5D map into a small-size map. (2) Use a trained deep Q network (DQN) to find the desired path on the small-size map. (3) Map the planned path back onto the original high-resolution map using a path-enhancement method. In addition, a hybrid exploration strategy and reward shaping are applied to train the DQN. The reward function is constructed from terrain, distance, and border information. Simulation results show that the proposed method completes the multi-objective 2.5D path planning task with high efficiency. For similar planned paths, the proposed method is more than 100 times faster than the A* method and 30 times faster than the H3DM method. Simulations also show that the method has strong generalization capability, enabling it to perform arbitrary untrained planning tasks.
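The shaped reward described in the abstract combines terrain, distance, and border terms. The sketch below illustrates one plausible construction on a 2.5D elevation grid; the function name `shaped_reward`, the weights, and the asymmetric up-slope penalty are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def shaped_reward(elevation, pos, nxt, goal,
                  w_terrain=1.0, w_dist=1.0, w_border=1.0):
    """Hypothetical shaped reward for a step pos -> nxt on a 2.5D grid.

    elevation : 2D array of terrain heights (the 2.5D map)
    pos, nxt  : (row, col) current and proposed cells
    goal      : (row, col) target cell
    """
    h, w = elevation.shape
    # Border term: penalize any step that leaves the map.
    if not (0 <= nxt[0] < h and 0 <= nxt[1] < w):
        return -w_border
    # Terrain term: up-slope motion costs energy; down-slope is treated
    # as free here (an assumption -- real vehicles recover only part of it).
    dz = elevation[nxt] - elevation[pos]
    r_terrain = -max(dz, 0.0)
    # Distance term: reward net progress toward the goal.
    d_old = np.hypot(goal[0] - pos[0], goal[1] - pos[1])
    d_new = np.hypot(goal[0] - nxt[0], goal[1] - nxt[1])
    r_dist = d_old - d_new
    return w_terrain * r_terrain + w_dist * r_dist
```

Weighting `w_terrain` against `w_dist` is what trades energy consumption against path length, which is the multi-objective aspect the DMOP balances.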
