SE3ET: SE(3)-Equivariant Transformer for Low-Overlap Point Cloud Registration (2407.16823v1)
Published 23 Jul 2024 in cs.RO
Abstract: Partial point cloud registration is a challenging problem in robotics, especially when the robot undergoes a large transformation, causing a significant initial pose error and low overlap between measurements. This work proposes exploiting equivariant learning from 3D point clouds to improve registration robustness. We propose SE3ET, an SE(3)-equivariant registration framework that employs equivariant point convolution and equivariant transformer designs to learn expressive and robust geometric features. We evaluated the proposed registration method on indoor and outdoor benchmarks where the point clouds are under arbitrary transformations and low overlap ratios. We also report generalization tests and run-time performance.
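The key property behind the framework is equivariance: applying a rigid transformation to the input point cloud should transform the learned features in a predictable way, rather than changing them arbitrarily. As a minimal illustrative sketch (not the SE3ET implementation), the snippet below builds a rotation-equivariant linear layer in the style of Vector Neurons: each feature channel is a 3D vector, and the layer only mixes channels, so rotating the input and applying the layer gives the same result as applying the layer and then rotating the output. All names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(rng):
    """Sample a proper rotation matrix (det = +1) via QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(r))   # fix column signs for a unique Q
    if np.linalg.det(q) < 0:      # flip one column if Q is a reflection
        q[:, 0] = -q[:, 0]
    return q

# Vector-valued features: C_in channels, each a vector in R^3.
C_in, C_out = 8, 16
W = rng.normal(size=(C_out, C_in))  # weights mix channels, never coordinates
V = rng.normal(size=(C_in, 3))

R = random_rotation(rng)

# Equivariance check: rotate-then-layer equals layer-then-rotate.
rot_then_layer = W @ (V @ R.T)
layer_then_rot = (W @ V) @ R.T
assert np.allclose(rot_then_layer, layer_then_rot)
```

Because the weights never touch the coordinate axis, the rotation commutes with the layer exactly; equivariant point convolutions and attention blocks extend this idea to spatial neighborhoods and to the full SE(3) group by also accounting for translations.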