Analysis of LiDAR Configurations on Off-road Semantic Segmentation Performance (2306.16551v1)

Published 28 Jun 2023 in cs.CV, cs.RO, and eess.IV

Abstract: This paper investigates the impact of LiDAR configuration shifts on the performance of 3D LiDAR point cloud semantic segmentation models, a topic not extensively studied before. We explore the effect of using different LiDAR channel counts when training and testing a 3D LiDAR point cloud semantic segmentation model, using Cylinder3D for the experiments. A Cylinder3D model is trained and tested on simulated 3D LiDAR point cloud datasets created with the Mississippi State University Autonomous Vehicle Simulator (MAVS) and on the 32- and 64-channel 3D LiDAR point clouds of the RELLIS-3D dataset, collected in a real-world off-road environment. Our experimental results demonstrate that sensor and spatial domain shifts significantly affect the performance of LiDAR-based semantic segmentation models. In the absence of spatial domain changes between training and testing, models trained and tested on the same sensor type generally performed better, and higher-resolution sensors outperformed lower-resolution ones. However, results varied when spatial domain changes were present: in some cases a sensor's higher resolution led to better performance both with and without sensor domain shifts, while in others the higher resolution caused overfitting to a specific domain, reducing generalization capability and degrading performance when tested on data with different sensor configurations.
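To study sensor domain shifts of the kind described above, one needs point clouds at different channel counts. A common way to emulate a lower-resolution LiDAR from higher-channel data is to subsample laser rings (beams) by ring index; the sketch below illustrates this idea with hypothetical function and variable names, and is not necessarily the procedure used in the paper.

```python
import numpy as np

def subsample_channels(points, rings, keep_every=2):
    """Keep every Nth laser ring to emulate a lower-resolution LiDAR.

    points: (N, 3) array of xyz coordinates.
    rings:  (N,) integer ring (beam) index for each point.
    keep_every=2 turns a 64-channel cloud into a 32-channel one.
    """
    mask = (rings % keep_every) == 0
    return points[mask], rings[mask]

# Toy example: a synthetic 64-ring cloud with 10 points per ring.
rng = np.random.default_rng(0)
points = rng.normal(size=(640, 3))
rings = np.repeat(np.arange(64), 10)

pts32, rings32 = subsample_channels(points, rings, keep_every=2)
print(len(np.unique(rings32)))  # 32 remaining rings
```

Subsampling real 64-channel scans in this way gives paired clouds that differ only in vertical resolution, which isolates the sensor-configuration factor from scene content when training and testing a model such as Cylinder3D.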
