Holistic Parking Slot Detection with Polygon-Shaped Representations (2310.11629v1)
Abstract: Current parking slot detection in advanced driver-assistance systems (ADAS) primarily relies on ultrasonic sensors. This method has several limitations, such as the need to scan the entire parking slot before detecting it, the inability to detect multiple slots in a row, and the difficulty of classifying them. To cope with the complex visual environment, vehicles are equipped with surround-view camera systems to detect vacant parking slots. Previous research in this field mostly uses image-domain models to solve the problem. These two-stage approaches separate the 2D detection and 3D pose estimation steps using camera calibration. In this paper, we propose the one-step Holistic Parking Slot Network (HPS-Net), a tailor-made adaptation of the You Only Look Once (YOLO)v4 algorithm. This camera-based approach directly outputs the four vertex coordinates of the parking slot in the top-view domain, instead of a bounding box in raw camera images. Slots with different visible points and shapes can thus be proposed from various viewing angles. A novel regression loss function, named polygon-corner Generalized Intersection over Union (GIoU), is also proposed to optimize polygon vertex positions, handle the slot orientation, and distinguish the entrance line. Experiments show that HPS-Net can detect various vacant parking slots with an F1-score of 0.92 on our internal Valeo Parking Slots Dataset (VPSD) and 0.99 on the public dataset PS2.0. It provides satisfying generalization and robustness in various parking scenarios, such as indoor (F1: 0.86) or paved ground (F1: 0.91). Moreover, it achieves a real-time detection speed of 17 FPS on Nvidia Drive AGX Xavier. A demo video can be found at https://streamable.com/75j7sj.
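The polygon-corner GIoU mentioned in the abstract extends the standard bounding-box GIoU to the four free-moving vertices of a parking slot. The paper's exact formulation is not reproduced here; the snippet below is only a minimal sketch of the underlying idea, computing a GIoU-style score between two arbitrary quadrilaterals with the shapely geometry library (an assumed dependency; the helper name `polygon_giou` is hypothetical). The smallest convex hull enclosing both polygons plays the role of the enclosing box in standard GIoU.

```python
# Illustrative sketch only: a generic polygon GIoU between two quadrilaterals.
# This is NOT the authors' polygon-corner GIoU loss; it only shows the idea of
# extending GIoU from axis-aligned boxes to four-vertex polygons.
from shapely.geometry import Polygon

def polygon_giou(verts_a, verts_b):
    """verts_*: list of four (x, y) vertices in top-view coordinates."""
    a, b = Polygon(verts_a), Polygon(verts_b)
    inter = a.intersection(b).area
    union = a.union(b).area
    iou = inter / union if union > 0 else 0.0
    # Smallest convex region enclosing both polygons (analogue of the
    # enclosing box in standard GIoU).
    hull = a.union(b).convex_hull.area
    return iou - (hull - union) / hull if hull > 0 else iou

# Example: a ground-truth slot vs. a slightly shifted prediction (metres).
gt   = [(0.0, 0.0), (2.5, 0.0), (2.5, 5.0), (0.0, 5.0)]
pred = [(0.2, 0.1), (2.6, 0.1), (2.6, 5.2), (0.2, 5.2)]
print(1.0 - polygon_giou(gt, pred))  # GIoU-style loss term
```

Note that this shapely-based computation is not differentiable, so it illustrates the metric rather than a drop-in training loss; HPS-Net additionally relies on vertex ordering to distinguish the entrance line, a detail this sketch does not model.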
- American Automobile Association (AAA), “Fact sheet: Active parking assist systems,” http://publicaffairsresources.aaa.biz/wp-content/uploads/2016/02/Automotive-Engineering-ADAS-Survey-Fact-Sheet-FINAL-3.pdf, 2015.
- N. L. Tenhundfeld, E. J. de Visser, A. J. Ries, V. S. Finomore, and C. C. Tossell, “Trust and Distrust of Automated Parking in a Tesla Model X,” Human Factors, vol. 62, no. 2, pp. 194–210, 2020, pMID: 31419163. [Online]. Available: https://doi.org/10.1177/0018720819865412
- A. Musabini, E. Bozbayir, H. Marcasuzaa, and O. A. I. Ramírez, “Park4U Mate: Context-Aware Digital Assistant for Personalized Autonomous Parking,” in 2021 IEEE Intelligent Vehicles Symposium (IV), 2021, pp. 724–731.
- BMW UX, “BMW Parking Assistant Complete Guide,” https://www.bmwux.com/bmw-performance-technology/bmw-technology/bmw-parking-assistant-complete-guide/, Sep. 2020.
- Mercedes-Benz Central Star Motor Cars Official Blog, “Learn How to Use Mercedes-Benz PARKTRONIC® with Active Parking Assist,” http://mercedesbenz.starmotorcars.com/blog/how-to-use-mercedes-benz-partronic-with-active-parking-assist/, Dec. 2021.
- Tesla, “Model X owner’s manual,” https://www.tesla.com/sites/default/files/model_x_owners_manual_north_america_en.pdf, 2020.
- Valeo, “Park4U® an automated parking system to park easily,” https://www.valeo.com/en/park4u-automated-parking/.
- Robert Bosch GmbH, “Parking Aid,” https://www.bosch-mobility-solutions.com/en/solutions/parking/parking-aid/.
- C. Wang, H. Zhang, M. Yang, X. Wang, L. Ye, and C. Guo, “Automatic Parking Based on a Bird’s Eye View Vision System,” Advances in Mechanical Engineering, vol. 6, p. 847406, Jan. 2014. [Online]. Available: http://journals.sagepub.com/doi/10.1155/2014/847406
- L. Zhang, J. Huang, X. Li, and L. Xiong, “Vision-Based Parking-Slot Detection: A DCNN-Based Approach and a Large-Scale Benchmark Dataset,” IEEE Transactions on Image Processing, vol. 27, no. 11, pp. 5350–5364, Nov. 2018. [Online]. Available: https://ieeexplore.ieee.org/document/8412601/
- J. Huang, L. Zhang, Y. Shen, H. Zhang, S. Zhao, and Y. Yang, “DMPR-PS: A Novel Approach for Parking-Slot Detection Using Directional Marking-Point Regression,” in 2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, Jul. 2019, pp. 212–217. [Online]. Available: https://ieeexplore.ieee.org/document/8784735/
- Z. Wu, W. Sun, M. Wang, X. Wang, L. Ding, and F. Wang, “PSDet: Efficient and Universal Parking Slot Detection,” IEEE Intelligent Vehicles Symposium, Proceedings, pp. 290–297, 2020.
- W. Li, H. Cao, J. Liao, J. Xia, L. Cao, and A. Knoll, “Parking Slot Detection on Around-View Images Using DCNN,” Frontiers in Neurorobotics, vol. 14, pp. 1–9, Jul. 2020.
- Y. Ma, Y. Liu, L. Zhang, Y. Cao, S. Guo, and H. Li, “Research Review on Parking Space Detection Method,” Symmetry, vol. 13, no. 1, p. 128, Jan. 2021. [Online]. Available: https://www.mdpi.com/2073-8994/13/1/128
- H. G. Jung, D. S. Kim, P. J. Yoon, and J. Kim, “Structure analysis based parking slot marking recognition for semi-automatic parking system,” in Structural, Syntactic, and Statistical Pattern Recognition, D.-Y. Yeung, J. T. Kwok, A. Fred, F. Roli, and D. de Ridder, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 384–393.
- Valeo, “Valeo Automated Valet Parking,” https://www.valeo.com/en/valeo-automated-valet-parking/.
- J. K. Suhr and H. G. Jung, “Sensor fusion-based vacant parking slot detection and tracking,” IEEE Transactions on Intelligent Transportation Systems, vol. 15, no. 1, pp. 21–36, 2014.
- K. Hamada, Z. Hu, M. Fan, and H. Chen, “Surround view based parking lot detection and tracking,” in 2015 IEEE Intelligent Vehicles Symposium (IV), pp. 1106–1111, 2015.
- H. Do and J. Y. Choi, “Context-based parking slot detection with a realistic dataset,” IEEE Access, vol. 8, pp. 171551–171559, 2020.
- W. Li, L. Cao, L. Yan, C. Li, X. Feng, and P. Zhao, “Vacant parking slot detection in the around view image based on deep learning,” Sensors (Switzerland), vol. 20, no. 7, pp. 1–22, 2020.
- J. K. Suhr and H. G. Jung, “End-to-End Trainable One-Stage Parking Slot Detection Integrating Global and Local Information,” IEEE Transactions on Intelligent Transportation Systems, pp. 1–13, 2021.
- Q. H. Bui and J. K. Suhr, “CNN-based Two-Stage Parking Slot Detection Using Region-Specific Multi-Scale Feature Extraction,” Aug. 2021. [Online]. Available: http://arxiv.org/abs/2108.06185
- C. Xu and X. Hu, “Real Time Detection Algorithm of Parking Slot Based on Deep Learning and Fisheye Image,” Journal of Physics: Conference Series, vol. 1518, no. 1, 2020.
- D. Lee, J. Kwon, S. Oh, W. Zheng, H. J. Seo, D. Nister, and B. R. Hervas, “Object Detection Using Skewed Polygons Suitable For Parking Space Detection,” Patent, 2020, US 2020/0294310 A1. [Online]. Available: https://patents.google.com/patent/US20200294310A1/en
- J. Wang, T. Mei, B. Kong, and H. Wei, “An approach of lane detection based on inverse perspective mapping,” in 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2014, pp. 35–38.
- A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” 2020. [Online]. Available: https://arxiv.org/abs/2004.10934
- H. Rezatofighi, N. Tsoi, J. Gwak, A. Sadeghian, I. Reid, and S. Savarese, “Generalized Intersection over Union: A Metric and a Loss for Bounding Box Regression,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2019.
- Z. Zheng, P. Wang, W. Liu, J. Li, R. Ye, and D. Ren, “Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression,” 2019. [Online]. Available: https://arxiv.org/abs/1911.08287
- Nvidia, “NVIDIA Drive AGX Systems,” https://www.nvidia.com/en-us/self-driving-cars/drive-platform/hardware/, 2022.
- D. Misra, “Mish: A self regularized non-monotonic activation function,” in British Machine Vision Conference, 2020.
- Nvidia, “NVIDIA Jetson AGX Systems,” https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/, 2023.
- C. Min, J. Xu, L. Xiao, D. Zhao, Y. Nie, and B. Dai, “Attentional Graph Neural Network for Parking-Slot Detection,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3445–3450, Apr. 2021. [Online]. Available: https://ieeexplore.ieee.org/document/9372817/
- J. Philion and S. Fidler, “Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D,” in Proceedings of the European Conference on Computer Vision, 2020.
- B. Zhou and P. Krähenbühl, “Cross-view transformers for real-time map-view semantic segmentation,” in CVPR, 2022.
- A. W. Harley, Z. Fang, J. Li, R. Ambrus, and K. Fragkiadaki, “Simple-bev: What really matters for multi-sensor bev perception?” 2022. [Online]. Available: https://arxiv.org/abs/2206.07959
- F. Bartoccioni, E. Zablocki, A. Bursuc, P. Perez, M. Cord, and K. Alahari, “LaRa: Latents and Rays for Multi-Camera Bird’s-Eye-View Semantic Segmentation,” in 6th Annual Conference on Robot Learning, 2022. [Online]. Available: https://openreview.net/forum?id=abd_D-iVjk0
Authors: Lihao Wang, Antonyo Musabini, Christel Leonet, Rachid Benmokhtar, Amaury Breheret, Chaima Yedes, Fabian Burger, Thomas Boulay, Xavier Perrotton