Mesh-LOAM: Real-time Mesh-Based LiDAR Odometry and Mapping (2312.15630v1)

Published 25 Dec 2023 in cs.RO

Abstract: Despite achieving real-time performance in mesh construction, most current LiDAR odometry and meshing methods struggle with complex scenes because they rely on explicit meshing schemes, which are usually sensitive to noise. To overcome these limitations, we propose a real-time mesh-based LiDAR odometry and mapping approach for large-scale scenes via implicit reconstruction and a parallel spatial-hashing scheme. To efficiently reconstruct triangular meshes, we propose an incremental voxel meshing method that integrates each scan by traversing every point only once and compresses space via a scalable partition module. Taking advantage of rapid access to the triangular meshes at any time, we design a point-to-mesh odometry with location- and feature-based data association to estimate the poses between incoming point clouds and the recovered triangular meshes. Experimental results on four datasets demonstrate the effectiveness of our proposed approach in generating accurate motion trajectories and environmental mesh maps.


Summary

  • The paper introduces Mesh-LOAM, a method that combines parallel spatial hashing with implicit mesh reconstruction for real-time LiDAR odometry and mapping.
  • It details a point-to-mesh registration algorithm that reduces pose drift by accurately estimating sensor poses in complex environments.
  • Experiments on four datasets, including KITTI, demonstrate accurate motion trajectories and rich environmental meshes at around 54 frames per second.

Introduction to LiDAR-Based Mapping and Localization

LiDAR odometry and mapping (LOAM) has become a crucial technology in robotics, particularly for enabling autonomous vehicles to understand their surroundings. LOAM uses data from a LiDAR sensor to continually estimate the sensor's pose (odometry) while constructing a detailed map of the environment (mapping); a conceptual sketch of this loop appears below. Despite steady advances, building maps that are both accurate and efficient enough for real-time use remains challenging, especially in complex scenes.
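
The following minimal Python sketch is purely illustrative: the callback names (`register`, `integrate`) and the map state are assumptions, not Mesh-LOAM's API. It only shows the alternation between odometry and mapping that any LOAM-style system performs.

```python
# Conceptual LOAM loop: alternately estimate the sensor pose against the
# current map (odometry), then fuse the scan into the map at that pose
# (mapping). All names here are illustrative assumptions.
def loam_loop(scans, register, integrate, map_state, pose):
    trajectory = []
    for scan in scans:
        pose = register(scan, map_state, pose)   # odometry: align scan to map
        integrate(map_state, scan, pose)         # mapping: fuse scan into map
        trajectory.append(pose)
    return trajectory
```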

Enhancing LiDAR Odometry and Mapping

Mesh-LOAM is a recent approach designed to improve the accuracy of LiDAR maps while reducing pose-estimation drift. Traditional methods tend to struggle in complex scenes because they rely on explicit triangulation schemes that are sensitive to noise and structural complexity; Mesh-LOAM aims to address these limitations.

Mesh-LOAM integrates an efficient, parallelizable spatial-hashing scheme with an implicit reconstruction method. Incoming scans are fused into a hashed voxel map by traversing each point only once, and triangular meshes are incrementally reconstructed from this map to form a continuous surface representation of the environment. This provides a rich representation for robotic navigation while running in real time, a significant step up from existing techniques. A minimal sketch of such a hashed voxel map follows.
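
The sketch below shows a hashed voxel map that integrates each scan by touching every point exactly once. The hash function is the classic large-prime XOR spatial hash (Teschner et al., 2003) commonly used in voxel-hashing pipelines; the voxel size, table size, and per-voxel statistics are illustrative assumptions rather than the paper's exact design.

```python
# Illustrative hashed voxel map with single-pass scan integration.
# Voxel size, table size, and stored statistics are assumed values,
# not Mesh-LOAM's exact parameters.
import numpy as np

VOXEL_SIZE = 0.5  # meters (assumed)

def voxel_key(point):
    """Quantize a 3D point to integer voxel coordinates."""
    p = np.asarray(point, dtype=float)
    return tuple(int(c) for c in np.floor(p / VOXEL_SIZE))

def spatial_hash(key, table_size=2**20):
    """Classic 3D spatial hash: XOR of coordinates scaled by large primes."""
    x, y, z = key
    return ((x * 73856093) ^ (y * 19349663) ^ (z * 83492791)) % table_size

class HashedVoxelMap:
    """Buckets of voxels keyed by spatial hash; collisions are resolved
    by storing the exact voxel key inside each bucket."""

    def __init__(self):
        self.buckets = {}

    def integrate_scan(self, points):
        """Fuse one scan: each point is visited once, updating running
        per-voxel centroid statistics."""
        for p in points:
            key = voxel_key(p)
            bucket = self.buckets.setdefault(spatial_hash(key), {})
            count, acc = bucket.get(key, (0, np.zeros(3)))
            bucket[key] = (count + 1, acc + np.asarray(p, dtype=float))

    def centroid(self, point):
        """Constant-time lookup of the centroid stored at a point's voxel."""
        key = voxel_key(point)
        entry = self.buckets.get(spatial_hash(key), {}).get(key)
        if entry is None:
            return None
        count, acc = entry
        return acc / count
```

Constant-time voxel lookup of this kind is what makes both incremental meshing and the odometry's data association fast enough for real-time operation.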

Point-to-Mesh Odometry

The approach introduces a point-to-mesh registration algorithm that estimates sensor poses by aligning each incoming point cloud with the reconstructed triangular meshes, using location- and feature-based data association. This step is key to keeping the odometry estimates accurate and reliable even in geometrically complex environments; a sketch of the underlying optimization follows.
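
Below is a hedged sketch of one Gauss-Newton step of point-to-mesh registration, minimizing point-to-plane distances from scan points to their matched triangles. The `associate` callback, which returns a point on the matched triangle and its unit normal, is a hypothetical stand-in for the paper's location- and feature-based data association; the damped solver is an illustrative formulation, not Mesh-LOAM's exact one.

```python
# One Gauss-Newton step of point-to-mesh (point-to-plane) registration.
# `associate` is a hypothetical callback; in a full system it would
# query the mesh/voxel map for the triangle matched to each point.
import numpy as np

def skew(v):
    """Cross-product matrix: skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(w):
    """Rodrigues' formula: rotation matrix from an axis-angle vector."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def point_to_mesh_step(points, associate, R, t):
    """points:    (N, 3) scan points in the sensor frame.
    associate: maps a world-frame point to (q, n), a point on the
               matched triangle and its unit normal.
    Returns the updated pose (R, t) after one Gauss-Newton step."""
    H = np.zeros((6, 6))
    g = np.zeros(6)
    for p in points:
        pw = R @ p + t                    # transform into the map frame
        q, n = associate(pw)              # matched triangle plane
        r = float(n @ (pw - q))           # signed point-to-plane residual
        J = np.concatenate([-(n @ skew(R @ p)), n])  # d r / d [rot, trans]
        H += np.outer(J, J)
        g += J * r
    delta = -np.linalg.solve(H + 1e-6 * np.eye(6), g)  # damped normal equations
    return exp_so3(delta[:3]) @ R, t + delta[3:]
```

In practice this step would be iterated per scan, with the association re-queried against the hashed voxel map from the previous section as the pose estimate converges.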

Experimental Validation

Mesh-LOAM was evaluated on four datasets, including KITTI, which is known for its diverse urban, residential, and highway scenes. Across these tests, Mesh-LOAM generated highly accurate motion trajectories and rich environmental meshes while running at approximately 54 frames per second.

Conclusion

Rapid and accurate environmental mapping is essential for autonomous vehicles, and for robotics in general, which must handle the intricate and variable nature of the real world. With promising results across these evaluations, Mesh-LOAM presents itself as a potential benchmark for future advances in LOAM technology.