High-speed Autonomous Racing using Trajectory-aided Deep Reinforcement Learning (2306.07003v1)

Published 12 Jun 2023 in cs.RO

Abstract: The classical method of autonomous racing uses real-time localisation to follow a precalculated optimal trajectory. In contrast, end-to-end deep reinforcement learning (DRL) can train agents to race using only raw LiDAR scans. While classical methods prioritise optimisation for high-performance racing, DRL approaches have focused on low-performance contexts with little consideration of the speed profile. This work addresses the problem of using end-to-end DRL agents for high-speed autonomous racing. We present trajectory-aided learning (TAL), which trains DRL agents for high-performance racing by incorporating the optimal trajectory (racing line) into the learning formulation. Our method is evaluated using the TD3 algorithm on four maps in the open-source F1Tenth simulator. The results demonstrate that our method achieves a significantly higher lap completion rate at high speeds than the baseline, because TAL trains the agent to select a feasible speed profile that slows down in the corners while roughly tracking the optimal trajectory.
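
The key idea in TAL is to embed the precomputed racing line in the learning formulation so the agent learns both where to drive and how fast. The paper's exact formulation is not reproduced on this page, but the minimal sketch below shows one plausible shape for a trajectory-aided reward term, assuming the racing line is available as (x, y, reference speed) waypoints; the waypoint values, weights, and function name are illustrative assumptions, not the authors' method.

import numpy as np

# Hypothetical racing line: each row is (x, y, reference speed) along the track.
# In practice the racing line would be precomputed per map; these values are
# illustrative only.
RACING_LINE = np.array([
    [0.0, 0.0, 3.0],
    [1.0, 0.1, 3.5],
    [2.0, 0.4, 2.5],
    [3.0, 1.0, 1.8],   # a corner: lower reference speed
])

def trajectory_aided_reward(position, speed, w_dist=1.0, w_speed=0.5, max_speed=8.0):
    """Illustrative trajectory-aided reward: rewards staying near the racing
    line and matching its speed profile. Weights and scaling are assumptions,
    not the paper's exact formulation."""
    position = np.asarray(position, dtype=float)

    # Nearest racing-line waypoint to the agent's current position.
    idx = np.argmin(np.linalg.norm(RACING_LINE[:, :2] - position, axis=1))
    ref_xy, ref_speed = RACING_LINE[idx, :2], RACING_LINE[idx, 2]

    # Penalise cross-track distance and deviation from the reference speed.
    cross_track = np.linalg.norm(position - ref_xy)
    speed_error = abs(speed - ref_speed) / max_speed
    return 1.0 - w_dist * cross_track - w_speed * speed_error

# Example: agent slightly off the line and slower than the reference speed.
print(trajectory_aided_reward(position=(1.1, 0.2), speed=3.0))

A term of this kind would typically be combined with the usual lap-completion and collision penalties used in F1Tenth-style DRL training and optimised with an off-policy learner such as TD3, as described in the abstract.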
