LiDAR-based curb detection for ground truth annotation in automated driving validation (2312.00534v2)

Published 1 Dec 2023 in cs.CV

Abstract: Curb detection is essential for environmental awareness in Automated Driving (AD), as curbs typically delimit drivable and non-drivable areas. Annotated data are necessary for developing and validating an AD function; however, public datasets with annotated point-cloud curbs are scarce. This paper presents a method for detecting 3D curbs in a sequence of point clouds captured by a LiDAR sensor, which consists of two main steps. First, our approach detects curbs in each scan using a segmentation deep neural network. Then, a sequence-level processing step estimates the 3D curbs in the reconstructed point cloud using the odometry of the vehicle. From these 3D curb points, we obtain polylines structured according to the ASAM OpenLABEL standard. These detections can be used as pre-annotations in labelling pipelines to efficiently generate curb-related ground truth data. We validate our approach through an experiment in which different human annotators were asked to annotate curbs in a set of LiDAR-based sequences, with and without our automatically generated pre-annotations. The results show that our detections reduce manual annotation time by 50.99% while maintaining the same level of data quality.
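
The sequence-level step described in the abstract (aggregating per-scan curb detections into 3D polylines using the vehicle odometry) can be illustrated with a rough sketch. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: it assumes curb points have already been extracted per scan by a segmentation network, transforms them into a common frame with per-scan odometry poses, clusters them with DBSCAN, and simplifies each cluster into a polyline with the Ramer-Douglas-Peucker algorithm. All function names, thresholds, and the choice of clustering/simplification methods are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): aggregate per-scan curb points
# into 3D polylines using odometry, DBSCAN clustering, and RDP simplification.
import numpy as np
from sklearn.cluster import DBSCAN


def to_world(points_xyz: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Transform Nx3 curb points from the sensor frame to the world frame
    using a 4x4 odometry pose (assumed to be given for each scan)."""
    homog = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (pose @ homog.T).T[:, :3]


def rdp(points: np.ndarray, eps: float) -> np.ndarray:
    """Ramer-Douglas-Peucker simplification of an ordered Nx3 polyline."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    line = end - start
    norm = np.linalg.norm(line)
    if norm == 0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # Distance of every point to the line through the endpoints.
        dists = np.linalg.norm(np.cross(points - start, line), axis=1) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > eps:
        left = rdp(points[: idx + 1], eps)
        right = rdp(points[idx:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])


def curb_polylines(scans, poses, cluster_eps=0.5, min_samples=10, simplify_eps=0.05):
    """scans: list of Nx3 arrays of curb points detected in each LiDAR scan.
    poses: list of 4x4 odometry poses aligning each scan to the world frame.
    Returns a list of simplified 3D polylines, one per curb cluster."""
    world_pts = np.vstack([to_world(p, T) for p, T in zip(scans, poses)])
    labels = DBSCAN(eps=cluster_eps, min_samples=min_samples).fit_predict(world_pts)
    polylines = []
    for lab in set(labels) - {-1}:  # label -1 is DBSCAN noise
        cluster = world_pts[labels == lab]
        # Order points along the cluster's dominant direction before simplifying.
        direction = np.linalg.svd(cluster - cluster.mean(0))[2][0]
        ordered = cluster[np.argsort(cluster @ direction)]
        polylines.append(rdp(ordered, simplify_eps))
    return polylines
```

In a labelling pipeline, polylines produced this way could then be serialized as curb pre-annotations following the ASAM OpenLABEL structure mentioned in the abstract; the distance thresholds above are placeholders that would need tuning to the sensor and scene.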
