
Reflectivity Is All You Need!: Advancing LiDAR Semantic Segmentation (2403.13188v2)

Published 19 Mar 2024 in cs.CV, cs.RO, and eess.IV

Abstract: LiDAR semantic segmentation frameworks predominantly use geometry-based features to differentiate objects within a scan. Although these methods excel in scenarios with clear boundaries and distinct shapes, their performance declines in environments where boundaries are indistinct, particularly in off-road contexts. To address this issue, recent advances in 3D segmentation algorithms have aimed to leverage raw LiDAR intensity readings to improve prediction precision. However, despite these advances, existing learning-based models struggle to model the complex interactions between raw intensity and variables such as distance, incidence angle, material reflectivity, and atmospheric conditions. Building upon our previous work, this paper explores the advantages of employing calibrated intensity (also referred to as reflectivity) within learning-based LiDAR semantic segmentation frameworks. We start by demonstrating that adding reflectivity as an input enhances the LiDAR semantic segmentation model by providing a better data representation. Extensive experimentation with the RELLIS-3D off-road dataset shows that replacing intensity with reflectivity results in a 4% improvement in mean Intersection over Union (mIoU) for off-road scenarios. We also demonstrate the potential benefits of using calibrated intensity for semantic segmentation in urban environments (SemanticKITTI) and for cross-sensor domain adaptation. Additionally, we tested the Segment Anything Model (SAM) using reflectivity as input, resulting in improved segmentation masks for LiDAR images.
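The calibration the abstract refers to compensates raw intensity for the range and incidence-angle dependence predicted by the laser range equation (received power falls off roughly with the square of range and with the cosine of the incidence angle). The sketch below illustrates this standard correction; the function name, parameters, and the final normalization are illustrative assumptions, not the paper's exact calibration procedure.

```python
import numpy as np

def calibrate_intensity(intensity, ranges, cos_incidence, ref_range=1.0, eps=1e-6):
    """Approximate reflectivity from raw LiDAR intensity.

    Undoes the inverse-square range attenuation and the incidence-angle
    (cosine) falloff of the laser range equation. Illustrative only; the
    paper's calibration may differ in form and constants.
    """
    # undo range attenuation (inverse-square law) and incidence-angle falloff
    reflectivity = intensity * (ranges / ref_range) ** 2 / np.maximum(cos_incidence, eps)
    # normalize to [0, 1] so it can serve as an input channel alongside geometry
    return np.clip(reflectivity / reflectivity.max(), 0.0, 1.0)

# toy example: two returns with equal raw intensity at different ranges;
# after calibration the farther return is assigned a higher reflectivity
raw = np.array([0.5, 0.5])
r = np.array([10.0, 20.0])
cos_a = np.array([1.0, 1.0])
print(calibrate_intensity(raw, r, cos_a))  # → [0.25 1.  ]
```

In a segmentation pipeline such a reflectivity channel would replace (or accompany) the raw intensity channel of the projected range image fed to the network.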

References
  1. K. Viswanath, P. Jiang, S. PB, and S. Saripalli, “Off-road lidar intensity based semantic segmentation,” arXiv preprint arXiv:2401.01439, 2024.
  2. T. Cortinhal, G. Tzelepis, and E. Erdal Aksoy, “Salsanext: Fast, uncertainty-aware semantic segmentation of lidar point clouds,” in Advances in Visual Computing, G. Bebis, Z. Yin, E. Kim, J. Bender, K. Subr, B. C. Kwon, J. Zhao, D. Kalkofen, and G. Baciu, Eds.   Cham: Springer International Publishing, 2020, pp. 207–222.
  3. E. E. Aksoy, S. Baci, and S. Cavdar, “Salsanet: Fast road and vehicle segmentation in lidar point clouds for autonomous driving,” in 2020 IEEE Intelligent Vehicles Symposium (IV), 2020, pp. 926–932.
  4. X. Zhu, H. Zhou, T. Wang, F. Hong, Y. Ma, W. Li, H. Li, and D. Lin, “Cylindrical and asymmetrical 3d convolution networks for lidar segmentation,” arXiv preprint arXiv:2011.10033, 2020.
  5. A. Novo, N. Fariñas-Álvarez, J. Martínez-Sánchez, H. González-Jorge, and H. Lorenzo, “Automatic processing of aerial lidar data to detect vegetation continuity in the surroundings of roads,” Remote Sensing, vol. 12, no. 10, 2020. [Online]. Available: https://www.mdpi.com/2072-4292/12/10/1677
  6. W. Luo, S. Gan, X. Yuan, S. Gao, R. Bi, and L. Hu, “Test and analysis of vegetation coverage in open-pit phosphate mining area around dianchi lake using uav–vdvi,” Sensors, vol. 22, no. 17, 2022. [Online]. Available: https://www.mdpi.com/1424-8220/22/17/6388
  7. J. Jung and S.-H. Bae, “Real-time road lane detection in urban areas using lidar data,” Electronics, vol. 7, no. 11, 2018. [Online]. Available: https://www.mdpi.com/2079-9292/7/11/276
  8. K. Tan, W. Zhang, Z. Dong, X. Cheng, and X. Cheng, “Leaf and wood separation for individual trees using the intensity and density data of terrestrial laser scanners,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 8, pp. 7038–7050, 2021.
  9. W. Fang, X. Huang, F. Zhang, and D. Li, “Intensity correction of terrestrial laser scanning data by estimating laser transmission function,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 2, pp. 942–951, 2015.
  10. S. Kaasalainen, A. Kukko, T. Lindroos, P. Litkey, H. Kaartinen, J. Hyyppa, and E. Ahokas, “Brightness measurements and calibration with airborne and terrestrial laser scanners,” IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 2, pp. 528–534, 2008.
  11. J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall, “SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences,” in Proc. of the IEEE/CVF International Conf. on Computer Vision (ICCV), 2019.
  12. P. Jiang, P. Osteen, M. Wigness, and S. Saripalli, “RELLIS-3D dataset: Data, benchmarks and analysis,” in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 1110–1116.
  13. A. V. Jelalian, “Laser radar systems,” in EASCON 1980; Electronics and Aerospace Systems Conference, Jan. 1980, pp. 546–554.
  14. G. Biavati, G. D. Donfrancesco, F. Cairo, and D. G. Feist, “Correction scheme for close-range lidar returns,” Appl. Opt., vol. 50, no. 30, pp. 5872–5882, Oct 2011. [Online]. Available: https://opg.optica.org/ao/abstract.cfm?URI=ao-50-30-5872
  15. L. Wang, D. Li, Y. Zhu, L. Tian, and Y. Shan, “Cross-dataset collaborative learning for semantic segmentation,” CoRR, vol. abs/2103.11351, 2021. [Online]. Available: https://arxiv.org/abs/2103.11351
  16. A. Milioto, I. Vizzo, J. Behley, and C. Stachniss, “RangeNet++: Fast and accurate LiDAR semantic segmentation,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019, pp. 4213–4220.