
G3Reg: Pyramid Graph-based Global Registration using Gaussian Ellipsoid Model

Published 22 Aug 2023 in cs.CV and cs.RO | arXiv:2308.11573v2

Abstract: This study introduces a novel framework, G3Reg, for fast and robust global registration of LiDAR point clouds. In contrast to conventional complex keypoints and descriptors, we extract fundamental geometric primitives, including planes, clusters, and lines (PCL) from the raw point cloud to obtain low-level semantic segments. Each segment is represented as a unified Gaussian Ellipsoid Model (GEM), using a probability ellipsoid to ensure the ground truth centers are encompassed with a certain degree of probability. Utilizing these GEMs, we present a distrust-and-verify scheme based on a Pyramid Compatibility Graph for Global Registration (PAGOR). Specifically, we establish an upper bound, which can be traversed based on the confidence level for compatibility testing to construct the pyramid graph. Then, we solve multiple maximum cliques (MAC) for each level of the pyramid graph, thus generating the corresponding transformation candidates. In the verification phase, we adopt a precise and efficient metric for point cloud alignment quality, founded on geometric primitives, to identify the optimal candidate. The algorithm's performance is validated on three publicly available datasets and a self-collected multi-session dataset. Parameter settings remained unchanged during the experiment evaluations. The results exhibit superior robustness and real-time performance of the G3Reg framework compared to state-of-the-art methods. Furthermore, we demonstrate the potential for integrating individual GEM and PAGOR components into other registration frameworks to enhance their efficacy. Code: https://github.com/HKUST-Aerial-Robotics/G3Reg


Summary

  • The paper introduces a novel global registration framework using a Gaussian Ellipsoid Model to robustly align LiDAR point clouds.
  • It leverages a pyramid compatibility graph with a distrust-and-verify scheme to efficiently solve the maximum cliques problem for data association.
  • The approach demonstrates substantial improvements in registration accuracy and efficiency over state-of-the-art methods across diverse datasets.

Overview of "G3Reg: Pyramid Graph-based Global Registration using Gaussian Ellipsoid Model"

The paper introduces G3Reg, a framework for the global registration of LiDAR point clouds. Rather than relying on the complex keypoints and descriptors of conventional pipelines, G3Reg extracts low-level geometric primitives (planes, clusters, and lines) from the raw point cloud and represents each resulting segment with a Gaussian Ellipsoid Model (GEM). The GEM uses a probability ellipsoid that contains the ground-truth segment center with a specified probability, enabling a more robust and efficient registration process.
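The core of the GEM idea can be illustrated with a small sketch (not the authors' implementation): fit a Gaussian to a segment and test whether a candidate center lies inside the probability ellipsoid at a chosen confidence level, using the chi-square quantile of the squared Mahalanobis distance. The function names here (`fit_gem`, `inside_probability_ellipsoid`) are our own illustrative choices.

```python
import numpy as np
from scipy.stats import chi2

def fit_gem(points):
    """Fit a mean and covariance to an N x 3 point cloud segment."""
    mu = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    return mu, cov

def inside_probability_ellipsoid(x, mu, cov, confidence=0.95):
    """True if x falls within the Gaussian's ellipsoid at `confidence`.

    The ellipsoid boundary corresponds to the chi-square quantile of the
    squared Mahalanobis distance (3 degrees of freedom for 3D points).
    """
    d = x - mu
    m2 = d @ np.linalg.solve(cov, d)          # squared Mahalanobis distance
    return bool(m2 <= chi2.ppf(confidence, df=3))

# Toy usage: a roughly planar segment and its own centroid.
rng = np.random.default_rng(0)
seg = rng.normal(size=(200, 3)) * np.array([2.0, 1.0, 0.05])
mu, cov = fit_gem(seg)
print(inside_probability_ellipsoid(mu, mu, cov))  # centroid is always inside -> True
```

Raising the confidence level inflates the ellipsoid, which is what makes a looser compatibility test possible at the coarse levels of the pyramid described below.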

Methodology

G3Reg comprises two key innovations: the GEM-based data association and a distrust-and-verify scheme facilitated by a Pyramid Compatibility Graph for Global Registration (PAGOR). The framework is built on three principal stages:

  1. GEM Extraction: Geometric primitives are extracted from raw point clouds, and each is parameterized by the proposed GEM. This encapsulates both a statistical representation of segments and a pseudo-Gaussian parameter to account for uncertainties in segment centers.
  2. Pyramid Compatibility Graph: The second innovation involves constructing a compatibility graph to test the pairwise compatibility of GEMs using a multi-threshold compatibility test. This graph-theoretical approach efficiently solves the maximum cliques problem to identify potential inlier sets.
  3. Transformation Estimation and Verification: Multiple candidate transformations are generated from the maximum cliques (MACs), and an evaluation function based on the Chamfer distance is employed to verify the candidates and select the most suitable transformation. The function accounts for the geometric fidelity of the alignment to maximize registration quality.
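Under simplifying assumptions (center-to-center correspondences, one scalar distance threshold per pyramid level, and brute-force clique search in place of the parallel solver the paper builds on), stages 2 and 3 can be sketched as:

```python
import itertools
import numpy as np

def compatibility_graph(src, dst, tau):
    """Adjacency matrix: correspondences i, j are compatible when the
    source and target pairwise distances agree within tau (distance
    invariance under rigid motion)."""
    n = len(src)
    adj = np.zeros((n, n), dtype=bool)
    for i, j in itertools.combinations(range(n), 2):
        d_src = np.linalg.norm(src[i] - src[j])
        d_dst = np.linalg.norm(dst[i] - dst[j])
        if abs(d_src - d_dst) <= tau:
            adj[i, j] = adj[j, i] = True
    return adj

def max_clique(adj):
    """Brute-force maximum clique (fine for a handful of correspondences)."""
    n = len(adj)
    for r in range(n, 0, -1):
        for combo in itertools.combinations(range(n), r):
            if all(adj[i, j] for i, j in itertools.combinations(combo, 2)):
                return list(combo)
    return []

def rigid_transform(src, dst):
    """Least-squares rotation and translation (Kabsch/Horn via SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

# Toy usage: 5 true correspondences plus 2 outliers, three pyramid levels.
rng = np.random.default_rng(1)
src = rng.uniform(-10, 10, size=(7, 3))
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
dst = src @ R_true.T + np.array([2.0, -1.0, 0.5])
dst[5:] += rng.uniform(50, 100, size=(2, 3))   # corrupt the last two matches

candidates = []
for tau in (1.0, 0.5, 0.1):                    # loose -> tight pyramid levels
    clique = max_clique(compatibility_graph(src, dst, tau))
    if len(clique) >= 3:
        candidates.append(rigid_transform(src[clique], dst[clique]))
```

Each pyramid level yields one clique and hence one transformation candidate; the looser levels admit more (possibly spurious) correspondences, which is exactly why the verification stage is needed to pick among the candidates.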

Numerical Results and Contributions

G3Reg was validated on several datasets, including public benchmarks and a self-collected multi-session dataset, demonstrating superior registration accuracy and time efficiency compared to state-of-the-art baselines such as FPFH descriptors combined with TEASER++. Notably, it achieves a high recall rate across diverse environmental scenarios and improves the handling of challenging cases characterized by low overlap and large viewpoint differences.
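The verification stage described above can be sketched as follows. This is a simplification we assume for illustration, not the paper's exact metric: candidates are scored on primitive centers only, with a truncated nearest-neighbour Chamfer distance, and the lowest-scoring candidate wins.

```python
import numpy as np

def chamfer_score(src, dst, R, t, trunc=1.0):
    """Mean truncated nearest-neighbour distance after applying (R, t)."""
    moved = src @ R.T + t
    d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
    return np.minimum(d.min(axis=1), trunc).mean()

def best_candidate(src, dst, candidates, trunc=1.0):
    """Pick the transformation candidate with the lowest Chamfer score."""
    scores = [chamfer_score(src, dst, R, t, trunc) for R, t in candidates]
    return candidates[int(np.argmin(scores))]

# Toy usage: the correct translation beats a wildly wrong one.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.]])
dst = src + np.array([1.0, 1.0, 0.0])
good = (np.eye(3), np.array([1.0, 1.0, 0.0]))
bad = (np.eye(3), np.array([9.0, 9.0, 9.0]))
R, t = best_candidate(src, dst, [bad, good])
print(t)  # -> [1. 1. 0.]
```

Truncating the per-point distance keeps a few badly matched primitives from dominating the score, which mirrors the robustness motivation behind the paper's alignment-quality metric.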

The framework's contributions are multi-faceted:

  • Robustness and Efficiency: The GEM representation mitigates the descriptor-inconsistency problems of traditional keypoint-based models, keeping registration efficient even under strong point-density variation and occlusion.
  • Generalization to Various Scenarios: By maintaining consistent parameter settings across different scenarios, including various LiDAR types, the approach demonstrates a high degree of generalization.
  • Enhancement of Existing Frameworks: The authors advocate the integration of individual components, like GEM and PAGOR, into other frameworks to enhance their effectiveness.

Implications and Future Directions

G3Reg paves the way for more reliable, real-time solutions in robotics and autonomous systems, particularly for tasks such as loop closure and multi-session simultaneous localization and mapping (SLAM). The proposed distrust-and-verify paradigm provides significant flexibility in handling uncertainty and selecting among transformation candidates.

For future work, segment-based methodologies could be enhanced with higher-level descriptors or semantic information. Additionally, techniques for estimating the localizability of an environment could further improve the robustness of point cloud registration in complex and unstructured settings.

G3Reg stands as a significant advancement in the field of global registration of LiDAR point clouds, contributing to more resilient and adaptable solutions necessary for real-world applications.
