
NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering (2306.07632v3)

Published 13 Jun 2023 in cs.CV and cs.GR

Abstract: This paper presents NeuS-PIR, a method for recovering relightable neural surfaces using pre-integrated rendering from multi-view images or video. Unlike methods based on NeRF and discrete meshes, our method utilizes an implicit neural surface representation to reconstruct high-quality geometry, which facilitates the factorization of the radiance field into two components: a spatially varying material field and an all-frequency lighting representation. This factorization, jointly optimized using an adapted differentiable pre-integrated rendering framework with material encoding regularization, in turn addresses the ambiguity of geometry reconstruction and leads to better disentanglement and refinement of each scene property. Additionally, we introduce a method to distill indirect illumination fields from the learned representations, further recovering complex illumination effects such as inter-reflection. Consequently, our method enables advanced applications such as relighting, which can be seamlessly integrated with modern graphics engines. Qualitative and quantitative experiments show that NeuS-PIR outperforms existing methods across various tasks on both synthetic and real datasets. Source code is available at https://github.com/Sheldonmao/NeuSPIR
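The pre-integrated (split-sum) shading the abstract refers to factors the rendering integral into a prefiltered environment lookup and a precomputed BRDF term, which is what makes the recovered assets compatible with rasterization engines. Below is a minimal PyTorch sketch of that idea, not the paper's actual implementation: the callables `env_irradiance` and `env_prefiltered` are hypothetical stand-ins for the learned all-frequency lighting representation, and `brdf_lut` is assumed to be a standard precomputed split-sum table.

```python
import torch
import torch.nn.functional as F

def split_sum_shade(normal, view, albedo, roughness, metallic,
                    env_irradiance, env_prefiltered, brdf_lut):
    """Shade surface points under a pre-integrated environment light (sketch).

    normal, view: (N, 3) unit vectors, `view` pointing from surface to camera.
    albedo: (N, 3); roughness, metallic: (N, 1), all in [0, 1].
    env_irradiance(dirs) -> (N, 3): cosine-convolved (diffuse) lighting lookup.
    env_prefiltered(dirs, roughness) -> (N, 3): GGX-prefiltered lighting lookup.
    brdf_lut: (R, R, 2) split-sum table; rows index roughness,
        columns index n_dot_v, channels hold (scale A, bias B).
    """
    n_dot_v = (normal * view).sum(-1, keepdim=True).clamp(1e-4, 1.0)
    refl = 2.0 * n_dot_v * normal - view  # reflection of the view direction

    # Diffuse lobe: albedo times pre-integrated irradiance along the normal.
    diffuse = (1.0 - metallic) * albedo * env_irradiance(normal)

    # Specular lobe: prefiltered radiance along the reflection direction,
    # modulated by the pre-integrated BRDF terms A (scale) and B (bias).
    f0 = 0.04 * (1.0 - metallic) + albedo * metallic  # Fresnel at normal incidence
    coords = torch.cat([n_dot_v, roughness], dim=-1) * 2.0 - 1.0  # map to [-1, 1]
    ab = F.grid_sample(brdf_lut.permute(2, 0, 1).unsqueeze(0),    # (1, 2, R, R)
                       coords.view(1, -1, 1, 2), align_corners=True)
    a, b = ab.view(2, -1, 1).unbind(0)                            # each (N, 1)
    specular = env_prefiltered(refl, roughness) * (f0 * a + b)

    return diffuse + specular
```

Because all lighting integrals are baked into the irradiance map, the prefiltered mip chain, and the 2D BRDF table, shading reduces to a few texture fetches per point; this is the same machinery real-time engines use, which is why materials and lighting factored this way transfer to them directly.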


