Advancements in 3D Lane Detection Using LiDAR Point Clouds: From Data Collection to Model Development (2309.13596v3)

Published 24 Sep 2023 in cs.CV

Abstract: Advanced Driver-Assistance Systems (ADAS) have successfully integrated learning-based techniques into vehicle perception and decision-making. However, their application in 3D lane detection for effective driving environment perception is hindered by the lack of comprehensive LiDAR datasets. The sparsity of LiDAR point clouds makes efficient manual annotation impractical. To solve this problem, we present LiSV-3DLane, a large-scale 3D lane dataset that comprises 20k frames of surround-view LiDAR point clouds with enriched semantic annotation. Unlike existing datasets confined to a frontal perspective, LiSV-3DLane provides a full 360-degree spatial panorama around the ego vehicle, capturing complex lane patterns in both urban and highway environments. We leverage the geometric traits of lane lines and the intrinsic spatial attributes of LiDAR data to design a simple yet effective automatic annotation pipeline for generating finer lane labels. To propel future research, we propose a novel LiDAR-based 3D lane detection model, LiLaDet, incorporating the spatial geometry learning of the LiDAR point cloud into Bird's Eye View (BEV) based lane identification. Experimental results indicate that LiLaDet outperforms existing camera- and LiDAR-based approaches in the 3D lane detection task on the K-Lane dataset and our LiSV-3DLane.
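
The abstract describes rasterizing surround-view LiDAR point clouds into a Bird's Eye View (BEV) representation for lane identification. The minimal sketch below illustrates that general idea only; the function name lidar_to_bev, the grid extents, cell size, and channel choices are all assumptions for illustration, not the paper's actual LiLaDet configuration. It bins an (N, 4) point cloud of (x, y, z, intensity) into a two-channel BEV pseudo-image.

```python
import numpy as np

def lidar_to_bev(points: np.ndarray,
                 x_range=(-50.0, 50.0),  # assumed surround-view extent in metres
                 y_range=(-50.0, 50.0),
                 resolution=0.2):        # assumed cell size in metres per cell
    """Rasterize an (N, 4) array of (x, y, z, intensity) points into a
    2-channel BEV pseudo-image (per-cell max height, max intensity).
    Empty cells stay 0, and sub-zero heights are clipped by the zero
    initialization -- acceptable for a sketch."""
    w = int(round((x_range[1] - x_range[0]) / resolution))
    h = int(round((y_range[1] - y_range[0]) / resolution))
    bev = np.zeros((2, h, w), dtype=np.float32)

    # Keep only points that fall inside the BEV grid.
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[keep]

    # Metric coordinates -> integer cell indices.
    col = ((pts[:, 0] - x_range[0]) / resolution).astype(np.int64)
    row = ((pts[:, 1] - y_range[0]) / resolution).astype(np.int64)

    # Unbuffered per-cell maxima over height (z) and intensity.
    np.maximum.at(bev[0], (row, col), pts[:, 2])
    np.maximum.at(bev[1], (row, col), pts[:, 3])
    return bev

# Usage: a synthetic 20k-point cloud -> a (2, 500, 500) BEV tensor.
cloud = (np.random.rand(20000, 4).astype(np.float32) - 0.5) * 80.0
print(lidar_to_bev(cloud).shape)  # (2, 500, 500)
```

A detection head would consume a tensor like this; in practice, learned pillar or voxel encoders (e.g., PointPillars) typically replace the hand-crafted max pooling used here.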

Authors (8)
  1. Runkai Zhao (8 papers)
  2. Yuwen Heng (5 papers)
  3. Heng Wang (136 papers)
  4. Yuanda Gao (18 papers)
  5. Shilei Liu (18 papers)
  6. Changhao Yao (1 paper)
  7. Jiawen Chen (24 papers)
  8. Weidong Cai (118 papers)
Citations (1)
