
Hilti SLAM Challenge 2023: Benchmarking Single + Multi-session SLAM across Sensor Constellations in Construction (2404.09765v2)

Published 15 Apr 2024 in cs.RO and eess.IV

Abstract: Simultaneous Localization and Mapping (SLAM) systems are a key enabler for positioning in both handheld and robotic applications. The Hilti SLAM Challenges organized over the past years have successfully benchmarked some of the world's best SLAM systems at high accuracy. However, further capabilities of these systems remain to be explored, such as platform agnosticism across varying sensor suites and multi-session SLAM. These factors serve as indirect indicators of robustness and ease of deployment in real-world applications. No publicly available dataset-and-benchmark combination considers these factors together. The Hilti SLAM Challenge 2023 Dataset and Benchmark addresses this gap. Additionally, we propose a novel fiducial marker design that makes a pre-surveyed point on the ground observable from an off-the-shelf LiDAR mounted on a robot, together with an algorithm to estimate its position at mm-level accuracy. Results from the challenge show increased overall participation and single-session SLAM systems that are increasingly accurate and operate successfully across varying sensor suites, but relatively few participants performed multi-session SLAM. Dataset URL: https://www.hilti-challenge.com/dataset-2023.html
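Benchmarks of this kind typically score a submitted trajectory by first rigidly aligning it to surveyed ground-truth points and then computing the absolute trajectory error (ATE) as an RMSE over point-wise residuals. The sketch below illustrates that standard recipe (Kabsch/Umeyama-style SVD alignment followed by RMSE); it is a generic illustration, not the challenge's actual evaluation pipeline, and the function names are the author's own:

```python
import numpy as np

def kabsch_align(est: np.ndarray, gt: np.ndarray):
    """Find rotation R and translation t that best map estimated
    positions onto ground truth (rows are 3D points)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """Absolute trajectory error (RMSE) after rigid alignment."""
    R, t = kabsch_align(est, gt)
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

For a trajectory that differs from ground truth only by a rigid transform, the aligned RMSE is (numerically) zero; any residual error after alignment reflects genuine drift or map distortion, which is what makes this metric useful for mm-level benchmarking against surveyed control points.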

Authors (4)
  1. Ashish Devadas Nair (1 paper)
  2. Julien Kindle (3 papers)
  3. Plamen Levchev (1 paper)
  4. Davide Scaramuzza (190 papers)
Citations (6)