PathRL: An End-to-End Path Generation Method for Collision Avoidance via Deep Reinforcement Learning (2310.13295v1)
Abstract: Robot navigation using deep reinforcement learning (DRL) has shown great potential for improving the performance of mobile robots. Nevertheless, most existing DRL-based navigation methods train a policy that directly commands the robot with low-level controls, such as linear and angular velocities, which leads to unstable speeds and unsmooth trajectories during long-term execution. An alternative is to train a DRL policy that outputs the navigation path directly. However, two roadblocks arise when training a policy that outputs paths: (1) the action space of candidate paths has much higher dimensionality than that of low-level commands, which makes training harder; and (2) tracking a path takes multiple time steps rather than one, so the path must anticipate the robot's interactions with the dynamic environment over multiple time steps, which further amplifies the training challenge. In response to these challenges, we propose PathRL, a novel DRL method that trains the policy to generate the navigation path for the robot. Specifically, we employ action-space discretization techniques and a tailored state-space representation to address these challenges. In our experiments, PathRL achieves higher success rates and lower angular-rotation variability than other DRL navigation methods, yielding stable and smooth robot motion. We demonstrate the competitive edge of PathRL in real-world scenarios and in multiple challenging simulation environments.
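The abstract names two ingredients, a discretized path action space and multi-step tracking of the generated path, without showing code. The minimal Python sketch below illustrates one way these pieces can fit together: the policy's discrete choices are decoded into waypoints, and a standard pure-pursuit tracker converts the path into velocity commands over several control steps. The (angle, step) parameterization, all bin values, and all function names are illustrative assumptions, not PathRL's actual design.

```python
import numpy as np

# All constants, names, and bin choices below are illustrative assumptions;
# this is a sketch of the general idea, not the paper's released code.
K_WAYPOINTS = 5
ANGLE_BINS = np.deg2rad(np.linspace(-30.0, 30.0, 7))  # discrete heading offsets
STEP_BINS = np.array([0.2, 0.4, 0.6])                 # discrete step lengths (m)

def decode_path(action_indices, pose):
    """Decode a (K, 2) array of discrete (angle_bin, step_bin) choices into
    world-frame waypoints, starting from the robot pose (x, y, theta)."""
    x, y, theta = pose
    path = []
    for a_idx, s_idx in action_indices:
        theta += ANGLE_BINS[a_idx]                # accumulate heading offset
        x += STEP_BINS[s_idx] * np.cos(theta)
        y += STEP_BINS[s_idx] * np.sin(theta)
        path.append((x, y))
    return path

def pure_pursuit_cmd(pose, path, lookahead=0.5, v=0.5):
    """One step of a classic pure-pursuit tracker: steer toward the first
    waypoint at least `lookahead` meters away; returns (v, omega)."""
    x, y, theta = pose
    target = next(((px, py) for px, py in path
                   if np.hypot(px - x, py - y) >= lookahead), path[-1])
    alpha = np.arctan2(target[1] - y, target[0] - x) - theta
    alpha = (alpha + np.pi) % (2.0 * np.pi) - np.pi   # wrap to [-pi, pi]
    return v, 2.0 * v * np.sin(alpha) / lookahead     # omega = v * curvature

# Example: decode one (random) path action and compute a tracking command.
rng = np.random.default_rng(0)
action = np.column_stack([rng.integers(len(ANGLE_BINS), size=K_WAYPOINTS),
                          rng.integers(len(STEP_BINS), size=K_WAYPOINTS)])
waypoints = decode_path(action, pose=(0.0, 0.0, 0.0))
v_cmd, w_cmd = pure_pursuit_cmd((0.0, 0.0, 0.0), waypoints)
```

In this framing, a single path action commits the robot to many low-level control steps, which is exactly why the abstract notes that the generated path must anticipate interactions with the dynamic environment over a longer horizon than a single velocity command would.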
Authors: Wenhao Yu, Jie Peng, Quecheng Qiu, Hanyu Wang, Lu Zhang, Jianmin Ji