DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading (2404.09412v2)
Abstract: Reconstructing and editing 3D objects and scenes both play crucial roles in computer graphics and computer vision. Neural radiance fields (NeRFs) can achieve realistic reconstruction and editing results but suffer from inefficient rendering. Gaussian splatting significantly accelerates rendering by rasterizing Gaussian ellipsoids. However, Gaussian splatting uses a single Spherical Harmonic (SH) function to model both texture and lighting, which prevents these components from being edited independently. Recent attempts to decouple texture and lighting within the Gaussian splatting representation may fail to produce plausible geometry and decomposition results on reflective scenes. Additionally, the forward shading technique they employ introduces noticeable blending artifacts during relighting, because the geometry attributes of the Gaussians are optimized under the original illumination and may not suit novel lighting conditions. To address these issues, we introduce DeferredGS, a method for decoupling and editing the Gaussian splatting representation using deferred shading. To achieve successful decoupling, we model the illumination with a learnable environment map and define additional attributes such as texture parameters and normal direction on each Gaussian, where the normal is distilled from a jointly trained signed distance function. More importantly, we apply deferred shading, which yields more realistic relighting effects than previous methods. Both qualitative and quantitative experiments demonstrate the superior performance of DeferredGS in novel view synthesis and editing tasks.
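The following is a minimal NumPy sketch, not the authors' code, contrasting the two shading orders the abstract refers to: forward shading blends per-Gaussian shaded colors, while deferred shading first blends attributes into screen-space buffers and shades once per pixel. The toy Lambertian model, the fixed light direction, and all variable names are illustrative assumptions; DeferredGS itself uses a learnable environment map and physically based texture parameters.

```python
# Illustrative sketch only: forward vs. deferred shading over splatting weights.
import numpy as np

H, W, N = 4, 4, 8                                  # tiny image and Gaussian count
rng = np.random.default_rng(0)

albedo  = rng.random((N, 3))                       # per-Gaussian texture parameter (assumed)
normal  = rng.normal(size=(N, 3))
normal /= np.linalg.norm(normal, axis=1, keepdims=True)
weights = rng.random((H, W, N))                    # stand-in for alpha-blending weights
weights /= weights.sum(axis=-1, keepdims=True)

light_dir = np.array([0.0, 0.0, 1.0])              # stand-in for an environment map query

def shade(a, n):
    """Toy Lambertian shading; a real system would evaluate a BRDF and environment map."""
    return a * np.clip(n @ light_dir, 0.0, 1.0)[..., None]

# Forward shading: shade each Gaussian, then blend the shaded colors into the image.
forward = np.einsum('hwn,nc->hwc', weights, shade(albedo, normal))

# Deferred shading: blend attributes into G-buffers, then shade once per pixel.
g_albedo = np.einsum('hwn,nc->hwc', weights, albedo)
g_normal = np.einsum('hwn,nc->hwc', weights, normal)
g_normal /= np.linalg.norm(g_normal, axis=-1, keepdims=True)
deferred = shade(g_albedo, g_normal)

print(np.abs(forward - deferred).max())            # the two pipelines generally differ
```

Because shading is nonlinear in the normal, blending shaded colors (forward) and shading blended attributes (deferred) give different results, which is why relighting with per-Gaussian forward shading can produce blending artifacts.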