
Adaptive Denoising-Enhanced LiDAR Odometry for Degeneration Resilience in Diverse Terrains (2309.14641v2)

Published 26 Sep 2023 in cs.RO

Abstract: The flexibility of Simultaneous Localization and Mapping (SLAM) algorithms across varied environments has long been a significant challenge. To address LiDAR odometry drift in high-noise settings, integrating clustering methods that filter out unstable features has become an effective module in SLAM frameworks. However, reducing the amount of point cloud data can cause information loss and possible degeneration. This research therefore proposes a LiDAR odometry method that dynamically assesses the reliability of the point cloud. The algorithm improves adaptability in diverse settings by selecting important feature points with sensitivity to the level of environmental degeneration. First, a fast adaptive Euclidean clustering algorithm based on the range image is proposed; combined with depth clustering, it extracts the primary structural points of the environment, defined as ambient skeleton points. Then, the environmental degeneration level is computed from the dense normal features of the skeleton points, and the point cloud cleaning is dynamically adjusted accordingly. The algorithm is validated on the KITTI benchmark and in real environments, demonstrating higher accuracy and robustness across different environments.
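As a rough illustration of the range-image-based adaptive Euclidean clustering step described above, the sketch below flood-fills a range-image-organized point cloud, merging neighboring pixels whose 3-D Euclidean distance falls under a tolerance that grows with measured range (so sparser far-field returns still cluster together). This is a minimal reconstruction of the general idea, not the paper's actual algorithm: the function name and the `base_tol`/`range_gain` parameters are illustrative assumptions.

```python
import numpy as np
from collections import deque

def range_image_euclidean_clustering(points, rows, cols,
                                     base_tol=0.3, range_gain=0.01):
    """Cluster a range-image-organized point cloud (rows x cols x 3).

    Adjacent range-image pixels are merged into one cluster when their
    3-D Euclidean distance is below an adaptive threshold that grows
    linearly with the measured range of the current point.
    """
    labels = -np.ones((rows, cols), dtype=int)
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] != -1 or not np.isfinite(points[r, c]).all():
                continue  # already labeled, or an invalid return
            # BFS flood fill over the 4-connected range-image grid
            queue = deque([(r, c)])
            labels[r, c] = next_label
            while queue:
                cr, cc = queue.popleft()
                p = points[cr, cc]
                # tolerance grows with range: distant points are sparser
                tol = base_tol + range_gain * np.linalg.norm(p)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = cr + dr, (cc + dc) % cols  # wrap azimuth
                    if 0 <= nr < rows and labels[nr, nc] == -1:
                        q = points[nr, nc]
                        if np.isfinite(q).all() and np.linalg.norm(p - q) < tol:
                            labels[nr, nc] = labels[cr, cc]
                            queue.append((nr, nc))
            next_label += 1
    return labels
```

Operating on the range-image grid keeps the neighbor search O(1) per point (no k-d tree), which is what makes this style of Euclidean clustering fast enough for per-scan use.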

