
Path Planning based on 2D Object Bounding-box (2402.14933v1)

Published 22 Feb 2024 in cs.RO and cs.AI

Abstract: The implementation of Autonomous Driving (AD) technologies within urban environments presents significant challenges. These challenges necessitate the development of advanced perception systems and motion planning algorithms capable of managing situations of considerable complexity. Although end-to-end AD methods utilizing LiDAR sensors have achieved significant success in this scenario, we argue that their drawbacks may hinder practical application. Instead, we propose vision-centric AD as a promising alternative that offers a streamlined model without compromising performance. In this study, we present a path planning method that uses 2D bounding boxes of objects, developed through imitation learning in urban driving scenarios. This is achieved by integrating high-definition (HD) map data with images captured by surrounding cameras. Subsequent perception tasks involve bounding-box detection and tracking, while the planning phase employs both local embeddings via a Graph Neural Network (GNN) and global embeddings via a Transformer for temporal-spatial feature aggregation, ultimately producing optimal path planning information. We evaluated our model on the nuPlan planning task and observed that it performs competitively in comparison to existing vision-centric methods.
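To make the planning stage described in the abstract more concrete, the following is a minimal PyTorch sketch, not the authors' implementation: per-frame local embeddings are computed by a small GNN over detected 2D bounding boxes and HD-map vectors, a Transformer then aggregates these embeddings across time, and an output head regresses planned waypoints. All module names, feature dimensions, the fully connected message-passing scheme, and the waypoint horizon are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LocalGNN(nn.Module):
    """Hypothetical per-frame GNN over 2D bounding-box and HD-map vector features."""
    def __init__(self, in_dim=8, hid=64, layers=2):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid)
        self.msg = nn.ModuleList(nn.Linear(2 * hid, hid) for _ in range(layers))

    def forward(self, nodes):                      # nodes: (N, in_dim) for one frame
        h = torch.relu(self.embed(nodes))
        for lin in self.msg:
            # Fully connected message passing: each node receives the mean of all nodes.
            agg = h.mean(dim=0, keepdim=True).expand_as(h)
            h = torch.relu(lin(torch.cat([h, agg], dim=-1)))
        return h.max(dim=0).values                 # (hid,) local embedding of the frame

class PlannerSketch(nn.Module):
    """GNN local embeddings per frame + Transformer over time -> waypoint regression."""
    def __init__(self, hid=64, horizon=8):
        super().__init__()
        self.gnn = LocalGNN(hid=hid)
        enc_layer = nn.TransformerEncoderLayer(d_model=hid, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.horizon = horizon
        self.head = nn.Linear(hid, horizon * 2)    # (x, y) per planned waypoint

    def forward(self, frame_nodes):                # list of (N_t, in_dim) tensors over time
        tokens = torch.stack([self.gnn(f) for f in frame_nodes]).unsqueeze(0)  # (1, T, hid)
        fused = self.temporal(tokens)[:, -1]       # last time step as the global embedding
        return self.head(fused).view(-1, self.horizon, 2)

# Toy usage: 4 past frames, each with a handful of detected boxes / map vectors.
frames = [torch.randn(5, 8) for _ in range(4)]
waypoints = PlannerSketch()(frames)
print(waypoints.shape)                             # torch.Size([1, 8, 2])
```

In an imitation-learning setup such as the one the paper describes, a model of this shape would be trained by regressing the predicted waypoints against expert trajectories from the nuPlan data; the sketch omits detection, tracking, and loss details.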

