DDM: A Metric for Comparing 3D Shapes Using Directional Distance Fields (2401.09736v5)
Abstract: Quantifying the discrepancy between 3D geometric models, which may be represented as either point clouds or triangle meshes, is a pivotal issue with broad applications. Existing methods mainly focus on directly establishing the correspondence between two models and then aggregating point-wise distances between corresponding points, making them either inefficient or ineffective. In this paper, we propose DDM, an efficient, effective, robust, and differentiable distance metric for 3D geometry data. Specifically, we construct DDM based on a proposed implicit representation of 3D models, namely the directional distance field (DDF), which defines the directional distances of 3D points to a model to capture its local surface geometry. We then recast the discrepancy between two 3D geometric models as the discrepancy between their DDFs defined on an identical domain, naturally establishing model correspondence. To demonstrate the advantage of our DDM, we explore various distance metric-driven 3D geometric modeling tasks, including template surface fitting, rigid registration, non-rigid registration, scene flow estimation, and human pose optimization. Extensive experiments show that our DDM achieves significantly higher accuracy on all tasks. As a generic distance metric, DDM has the potential to advance the field of 3D geometric modeling. The source code is available at https://github.com/rsy6318/DDM.
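The core idea in the abstract — sampling each model's distance field (with direction) on a shared query domain and comparing the fields instead of matching points directly — can be sketched as follows. This is a simplified illustration under assumed conventions, not the paper's actual DDF definition or its DDM formulation; the function names, the nearest-neighbor approximation of directional distance, and the L1/L2 aggregation are all hypothetical choices for exposition.

```python
import numpy as np
from scipy.spatial import cKDTree

def directional_distance_field(points, queries):
    """Sample an approximate directional distance field of a point cloud:
    for each query point, return its distance to the nearest model point
    and the unit direction pointing toward that point."""
    tree = cKDTree(points)
    dists, idx = tree.query(queries)            # nearest-neighbor distances
    vecs = points[idx] - queries                # vectors toward the surface
    dirs = vecs / np.maximum(dists[:, None], 1e-12)  # avoid divide-by-zero
    return dists, dirs

def field_discrepancy(pc_a, pc_b, queries):
    """Compare two shapes through their fields sampled on a shared domain,
    so correspondence comes from the common query points, not from
    matching points of one model to the other."""
    d_a, n_a = directional_distance_field(pc_a, queries)
    d_b, n_b = directional_distance_field(pc_b, queries)
    dist_term = np.mean(np.abs(d_a - d_b))
    dir_term = np.mean(np.linalg.norm(n_a - n_b, axis=1))
    return dist_term + dir_term
```

Because both fields are evaluated at the same query points, the comparison is symmetric in the two models and differentiable almost everywhere with respect to the model points, which is what makes field-based metrics usable as losses in tasks like registration and surface fitting.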