LiDAR-Generated Images Derived Keypoints Assisted Point Cloud Registration Scheme in Odometry Estimation (2309.10436v1)

Published 19 Sep 2023 in cs.RO

Abstract: Keypoint detection and description play a pivotal role in various robotics and autonomous applications, including visual odometry (VO), visual navigation, and simultaneous localization and mapping (SLAM). While a myriad of keypoint detectors and descriptors have been extensively studied on conventional camera images, their effectiveness on LiDAR-generated images, i.e. reflectivity and range images, has not been assessed. These images have gained attention due to their resilience in adverse conditions such as rain or fog. Additionally, they contain significant textural information that supplements the geometric information provided by LiDAR point clouds during the point cloud registration phase, especially when relying solely on LiDAR sensors. This addresses the drift encountered in LiDAR odometry (LO) in geometrically identical scenarios, or where parts of the raw point cloud are uninformative or even misleading. This paper analyzes the applicability of conventional image keypoint extractors and descriptors to LiDAR-generated images through a comprehensive quantitative investigation. Moreover, we propose a novel approach to enhance the robustness and reliability of LO: after extracting keypoints, we downsample the point cloud accordingly and feed the result into the point cloud registration phase for odometry estimation. Our experiments demonstrate that, compared to using the raw point cloud, the proposed approach achieves comparable accuracy with reduced computational overhead and a higher odometry publishing rate, and even superior performance in drift-prone scenarios. This, in turn, lays a foundation for subsequent investigations into the integration of LiDAR-generated images with LO. Our code is available on GitHub: https://github.com/TIERS/ws-lidar-as-camera-odom.
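The pipeline the abstract describes — spherically projecting the point cloud into a range image, detecting keypoints on that image, then keeping only the 3D points behind those keypoints as the downsampled cloud for registration — can be sketched as follows. This is a minimal numpy-only illustration, not the paper's implementation: a simple gradient-magnitude score stands in for the conventional detectors (ORB, SIFT, etc.) the paper evaluates, and all function names and projection parameters are illustrative assumptions.

```python
import numpy as np

def point_cloud_to_range_image(points, h=32, w=512, fov_up=15.0, fov_down=-15.0):
    """Spherically project an (N, 3) point cloud into an h x w range image.

    Returns the range image and, per pixel, the index of the source point
    (-1 for empty pixels). FOV bounds are in degrees (illustrative values).
    """
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])           # azimuth
    pitch = np.arcsin(points[:, 2] / np.maximum(r, 1e-9))  # elevation
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = np.clip((fov_up_r - pitch) / (fov_up_r - fov_down_r) * h, 0, h - 1).astype(int)
    img = np.zeros((h, w))
    idx = -np.ones((h, w), dtype=int)
    order = np.argsort(-r)  # write far points first so near returns win per pixel
    img[v[order], u[order]] = r[order]
    idx[v[order], u[order]] = order
    return img, idx

def keypoint_downsample(points, n_keep=256, **proj_kwargs):
    """Keep the 3D points behind the n_keep strongest range-image responses.

    The gradient-magnitude score is a placeholder for a real keypoint
    detector; the mapping back to 3D via the pixel-to-point index is the
    part that matters for the registration step.
    """
    img, idx = point_cloud_to_range_image(points, **proj_kwargs)
    gy, gx = np.gradient(img)
    score = np.hypot(gx, gy)
    score[idx < 0] = -np.inf                    # ignore empty pixels
    best = np.argsort(score.ravel())[::-1][:n_keep]
    keep = idx.ravel()[best]
    return points[np.unique(keep[keep >= 0])]
```

The downsampled cloud would then replace the raw scan as input to an ICP/GICP-style registration step, which is where the reduced computational overhead and higher publishing rate reported in the abstract would come from.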
