PointDifformer: Robust Point Cloud Registration With Neural Diffusion and Transformer (2404.14034v1)
Abstract: Point cloud registration is a fundamental technique in 3-D computer vision, with applications in graphics, autonomous driving, and robotics. Registration is difficult, however, under challenging conditions in which noise or perturbations are prevalent. We propose a robust point cloud registration approach that leverages graph neural partial differential equations (PDEs) and heat kernel signatures. Our method first uses graph neural PDE modules to extract high-dimensional features from point clouds by aggregating information from each 3-D point's neighborhood, thereby enhancing the robustness of the feature representations. We then incorporate heat kernel signatures into an attention mechanism to efficiently obtain corresponding keypoints. Finally, a singular value decomposition (SVD) module with learnable weights predicts the transformation between the two point clouds. Empirical experiments on a 3-D point cloud dataset demonstrate that our approach not only achieves state-of-the-art registration performance but is also more robust to additive noise and 3-D shape perturbations.
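The final stage described in the abstract — recovering a rigid transformation from weighted correspondences via SVD — is the classical weighted Procrustes/Kabsch solution. The sketch below is a minimal NumPy illustration of that closed-form step, not the paper's actual learnable module; the function name `weighted_svd_transform` and the assumption that per-correspondence weights are given (in the paper they would be predicted by the network) are ours.

```python
import numpy as np

def weighted_svd_transform(src, tgt, w):
    """Closed-form rigid transform (R, t) minimizing
    sum_i w_i * || R @ src_i + t - tgt_i ||^2  (weighted Kabsch)."""
    w = w / w.sum()                                # normalize weights
    src_c = (w[:, None] * src).sum(axis=0)         # weighted centroid of source
    tgt_c = (w[:, None] * tgt).sum(axis=0)         # weighted centroid of target
    # weighted cross-covariance of the centered point sets
    H = (src - src_c).T @ (w[:, None] * (tgt - tgt_c))
    U, _, Vt = np.linalg.svd(H)
    # correct for a possible reflection so that det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

Because every operation here (centroids, matrix product, SVD) is differentiable almost everywhere, the same computation can sit at the end of a network and let gradients flow back into the predicted correspondence weights, which is what makes a "learnable-weight SVD module" trainable end to end.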
Authors: Rui She, Qiyu Kang, Sijie Wang, Wee Peng Tay, Kai Zhao, Yang Song, Tianyu Geng, Yi Xu, Diego Navarro Navarro, Andreas Hartmannsgruber