Monge-Ampere Regularization for Learning Arbitrary Shapes from Point Clouds (2410.18477v3)
Abstract: Among commonly used implicit geometry representations, the signed distance function (SDF) is limited to modeling watertight shapes, whereas the unsigned distance function (UDF) can represent a wider variety of surfaces. However, the UDF has an inherent theoretical shortcoming: it is non-differentiable at the zero level set, which leads to sub-optimal reconstruction quality. In this paper, we propose the scaled-squared distance function (S$^2$DF), a novel implicit surface representation for modeling arbitrary surface types. S$^2$DF does not distinguish between inside and outside regions while effectively addressing the non-differentiability of the UDF at the zero level set. We demonstrate that S$^2$DF satisfies a second-order partial differential equation of Monge-Ampere type, which allows us to develop a learning pipeline that leverages a novel Monge-Ampere regularization to learn S$^2$DF directly from raw, unoriented point clouds without supervision from ground-truth S$^2$DF values. Extensive experiments across multiple datasets show that our method significantly outperforms state-of-the-art supervised approaches that require ground-truth surface information for training. The source code is available at https://github.com/chuanxiang-yang/S2DF.
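The intuition behind the representation is that, assuming S$^2$DF is proportional to the squared unsigned distance, $f = \alpha\, d^2$, its gradient $\nabla f = 2\alpha\, d\, \nabla d$ vanishes smoothly as $d \to 0$, which sidesteps the UDF's non-differentiability at the zero level set. The sketch below illustrates, under stated assumptions, how such a pipeline could be assembled in PyTorch: a coordinate MLP predicts the field value, a data term drives the prediction toward zero on the raw point cloud, and a Monge-Ampere-style regularizer constrains the determinant of the network's Hessian at query points via nested automatic differentiation. The architecture, sampling scheme, regularization target `ma_target`, and loss weights are illustrative placeholders, not the paper's actual formulation.

```python
# Minimal sketch (assumptions): a plain Softplus MLP stands in for the S2DF network,
# the data term drives the predicted value to zero on the input points, and the
# Monge-Ampere-style regularizer penalizes the Hessian determinant of f_theta at
# random query points toward a placeholder target `ma_target`. The exact PDE
# right-hand side, sampling strategy, and weights follow the paper, not this sketch.
import torch
import torch.nn as nn

class S2DFNet(nn.Module):
    """Simple coordinate MLP mapping R^3 -> R (a stand-in for the actual architecture)."""
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        dims = [3] + [hidden] * layers + [1]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                blocks.append(nn.Softplus(beta=100))  # smooth, so second derivatives are well defined
        self.mlp = nn.Sequential(*blocks)

    def forward(self, x):
        return self.mlp(x)

def hessian_det(f, x):
    """Determinant of the Hessian of the scalar field f at points x (N, 3), via nested autograd."""
    y = f(x)                                                          # (N, 1)
    grad = torch.autograd.grad(y.sum(), x, create_graph=True)[0]      # (N, 3)
    rows = [torch.autograd.grad(grad[:, i].sum(), x, create_graph=True)[0]
            for i in range(x.shape[1])]
    hess = torch.stack(rows, dim=1)                                   # (N, 3, 3)
    return torch.linalg.det(hess)                                     # (N,)

net = S2DFNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
points = torch.rand(5000, 3) * 2 - 1      # stand-in for a raw, unoriented point cloud
ma_target = 0.0                           # placeholder; the true target comes from the paper's PDE

for step in range(1000):
    surf = points[torch.randint(len(points), (512,))]
    query = (torch.rand(512, 3) * 2 - 1).requires_grad_(True)

    loss_data = net(surf).abs().mean()                                 # S2DF should vanish on the point cloud
    loss_ma = (hessian_det(net, query) - ma_target).abs().mean()       # Monge-Ampere-style regularization

    loss = loss_data + 0.1 * loss_ma      # illustrative weighting only
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Smooth activations (Softplus here, or sine) are what make the nested second-order differentiation meaningful; a ReLU network would have a zero Hessian almost everywhere, so this kind of regularizer pairs naturally with smooth coordinate networks.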