
3DGSR: Implicit Surface Reconstruction with 3D Gaussian Splatting

arXiv:2404.00409
Published Mar 30, 2024 in cs.CV and cs.GR

Abstract

In this paper, we present an implicit surface reconstruction method with 3D Gaussian Splatting (3DGS), namely 3DGSR, that allows for accurate 3D reconstruction with intricate details while inheriting the high efficiency and rendering quality of 3DGS. The key insight is incorporating an implicit signed distance field (SDF) within 3D Gaussians to enable them to be aligned and jointly optimized. First, we introduce a differentiable SDF-to-opacity transformation function that converts SDF values into corresponding Gaussians' opacities. This function connects the SDF and 3D Gaussians, allowing for unified optimization and enforcing surface constraints on the 3D Gaussians. During learning, optimizing the 3D Gaussians provides supervisory signals for SDF learning, enabling the reconstruction of intricate details. However, this provides only sparse supervisory signals to the SDF at locations occupied by Gaussians, which is insufficient for learning a continuous SDF. To address this limitation, we incorporate volumetric rendering and align the rendered geometric attributes (depth, normal) with those derived from 3D Gaussians. This consistency regularization introduces supervisory signals to locations not covered by discrete 3D Gaussians, effectively eliminating redundant surfaces outside the Gaussian sampling range. Our extensive experimental results demonstrate that 3DGSR enables high-quality 3D surface reconstruction while preserving the efficiency and rendering quality of 3DGS. Moreover, our method competes favorably with leading surface reconstruction techniques while offering a more efficient learning process and substantially better rendering quality. The code will be available at https://github.com/CVMI-Lab/3DGSR.

Figure: The pipeline jointly optimizes an implicit SDF field and 3D Gaussians under image supervision to reconstruct implicit surfaces.

Overview

  • 3DGSR is a method that merges implicit SDFs with 3D Gaussian Splatting for detailed 3D surface reconstruction, ensuring high-quality rendering and geometric accuracy.

  • The method introduces a differentiable SDF-to-opacity transformation, enabling detailed surface reconstructions by guiding the placement and shape of Gaussians with SDF values.

  • It uses volumetric rendering for SDF optimization, employing a consistency regularization to reduce artefacts and improve the overall quality of the reconstructed surface.

  • Through extensive evaluations, 3DGSR outperforms existing surface reconstruction techniques, demonstrating its potential in virtual reality, 3D printing, and digital heritage preservation.

3DGSR: Enhanced Surface Reconstruction via Implicit SDF and 3D Gaussian Splatting

Introduction to 3DGSR

The paper introduces 3DGSR, a method that utilizes implicit Signed Distance Fields (SDFs) within a 3D Gaussian Splatting (3DGS) framework to enable detailed and accurate 3D surface reconstruction. This method inherits the efficiency and high-quality rendering capabilities of 3DGS while integrating a novel differentiable SDF-to-opacity conversion, aimed at aligning implicit SDFs with 3D Gaussians for joint optimization. The key contributions include a method for effectively connecting SDF and 3D Gaussians, a strategy to provide dense supervisory signals for continuous SDF learning, and extensive evaluations demonstrating superior performance in both reconstruction quality and rendering efficiency.
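The SDF-to-opacity conversion named above can be sketched as follows. This is a minimal illustration rather than the paper's exact formulation: it assumes a bell-shaped mapping in which a Gaussian's opacity peaks on the SDF zero level set and decays with distance from the surface, and the sharpness parameter `beta` is hypothetical.

```python
import numpy as np

def sdf_to_opacity(sdf, beta=0.1):
    """Map SDF values at Gaussian centers to opacities (illustrative).

    Opacity approaches 1 on the zero level set (sdf == 0) and decays
    smoothly as |sdf| grows; `beta` controls how sharply it falls off.
    The exact transformation used in 3DGSR may differ from this sketch.
    """
    return np.exp(-0.5 * (sdf / beta) ** 2)

# Gaussians near the surface keep high opacity; distant ones fade out.
sdf_values = np.array([0.0, 0.05, 0.2, -0.3])
opacities = sdf_to_opacity(sdf_values)
```

Because the mapping is differentiable, gradients from the photometric loss on rendered images can flow through the opacities back into the SDF, which is what couples the two representations during joint optimization.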

Core Components and Methodology

  • Differentiable SDF-to-Opacity Transformation: An important innovation in 3DGSR is transforming SDF values into Gaussian opacities, enabling a unified optimization framework that aligns SDFs with 3D Gaussians. This approach allows the SDFs to guide the placement and shape of Gaussians, facilitating detailed surface reconstruction.
  • Volumetric Rendering for SDF Optimization: To overcome the sparse supervision provided by optimizing 3D Gaussians alone, 3DGSR incorporates volumetric rendering derived from the SDF. This step introduces consistency regularization, comparing depth and normals rendered from 3D Gaussians against those from the volumetric rendering, effectively optimizing the SDF over the entire space and reducing artefacts in areas not directly covered by Gaussians.
  • Experimental Validation: The method is rigorously evaluated against leading surface reconstruction techniques on diverse datasets. The results show that 3DGSR outperforms existing methods in terms of reconstruction quality, as evidenced by lower Chamfer distances and higher F1 scores, while maintaining competitive rendering performance.
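The consistency regularization described above can be sketched as a simple per-pixel loss between geometry rendered from the 3D Gaussians and geometry volume-rendered from the SDF. This is an illustrative reading, not the paper's exact loss: the function name and the L1-depth plus cosine-normal terms are assumptions.

```python
import numpy as np

def consistency_loss(depth_gs, normal_gs, depth_vol, normal_vol):
    """Align Gaussian-rendered geometry with SDF-volume-rendered geometry.

    depth_*  : (H, W) per-pixel depth maps
    normal_* : (H, W, 3) per-pixel unit normals
    Returns an L1 depth term plus a (1 - cosine similarity) normal term.
    """
    l_depth = np.abs(depth_gs - depth_vol).mean()
    cos = np.sum(normal_gs * normal_vol, axis=-1)  # per-pixel dot product
    l_normal = (1.0 - cos).mean()
    return l_depth + l_normal

# Identical geometry from both renderers yields zero loss.
depth = np.ones((4, 4))
normal = np.zeros((4, 4, 3))
normal[..., 2] = 1.0
loss = consistency_loss(depth, normal, depth, normal)
```

Minimizing such a term pushes the SDF toward the geometry implied by the Gaussians everywhere along the rendering rays, which is how dense supervision reaches regions the discrete Gaussians do not occupy.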

Implications and Future Directions

  • Practical Significance: 3DGSR presents a balanced approach to high-quality 3D surface reconstruction and efficient rendering, suitable for applications in virtual reality, 3D printing, and digital heritage preservation where both accurate geometrical details and visual quality are critical.
  • Theoretical Contributions: The research addresses the longstanding challenge of integrating SDF-based surface definition with point-based rendering techniques, offering a novel perspective that leverages the strengths of both to achieve superior results.
  • Future Research: The paper opens avenues for further exploration into combining implicit geometric representations with other rendering techniques, optimization strategies for the SDF-to-opacity conversion to accommodate varying geometric complexities, and the development of more sophisticated models for handling dynamic scenes.

Conclusion

3DGSR represents a significant advancement in the field of 3D surface reconstruction, providing a method that combines the detailed geometric representation capabilities of implicit SDF with the efficient rendering qualities of 3D Gaussian Splatting. Through its innovative approach to coupling these two components and the comprehensive evaluation against state-of-the-art methods, the paper sets a new benchmark for future research in the domain.
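For readers unfamiliar with the evaluation metric cited in the experiments, the symmetric Chamfer distance between two point sets can be sketched with a brute-force nearest-neighbour computation. Published benchmarks often report scaled or one-sided variants, so treat the exact form here as illustrative.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    For each point in p, find its nearest neighbour in q (and vice versa),
    then average the two directed distances. Brute force: fine for small
    illustrative sets, too slow for full benchmark meshes.
    """
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# A small shift between otherwise identical clouds gives a small distance.
cloud_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
cloud_b = cloud_a + 0.1
dist = chamfer_distance(cloud_a, cloud_b)
```

Lower values indicate that the reconstructed surface's sample points lie closer to the ground-truth scan, which is why the paper reports lower Chamfer distance as better.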

