
Unveiling the Ambiguity in Neural Inverse Rendering: A Parameter Compensation Analysis (2404.12819v1)

Published 19 Apr 2024 in cs.CV

Abstract: Inverse rendering aims to reconstruct the scene properties of objects solely from multiview images. However, it is an ill-posed problem prone to producing ambiguous estimations that deviate from physically accurate representations. In this paper, we utilize Neural Microfacet Fields (NMF), a state-of-the-art neural inverse rendering method, to illustrate this inherent ambiguity. We propose an evaluation framework to assess the degree of compensation or interaction between the estimated scene properties, aiming to explore the mechanisms behind this ill-posed problem and potential mitigation strategies. Specifically, we introduce artificial perturbations to one scene property and examine how adjusting another property can compensate for these perturbations. To facilitate such experiments, we introduce a disentangled variant of NMF in which material properties are modeled independently. The experimental findings underscore the intrinsic ambiguity present in neural inverse rendering and highlight the importance of providing additional guidance through geometry, material, and illumination priors.
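The perturbation-compensation idea can be illustrated with a deliberately simplified toy model (not the paper's NMF pipeline): under a purely multiplicative shading model, a global perturbation of the estimated albedo can be fully absorbed by re-estimating the illumination, so the photometric loss alone cannot distinguish the wrong decomposition from the true one. All names and the shading model below are illustrative assumptions.

```python
# Toy sketch of parameter compensation in inverse rendering.
# Shading model: image = albedo * light (per-pixel albedo, global light).
import numpy as np

rng = np.random.default_rng(0)

albedo_true = rng.uniform(0.2, 0.8, size=100)  # per-pixel albedo
light_true = 2.0                               # global light intensity
image = albedo_true * light_true               # "observed" image

# 1) Perturb the estimated albedo by a global factor.
albedo_perturbed = 1.5 * albedo_true
err_before = np.mean((albedo_perturbed * light_true - image) ** 2)

# 2) Let the illumination compensate: gradient descent on the
#    photometric loss with the perturbed albedo held fixed.
light = light_true
for _ in range(200):
    residual = albedo_perturbed * light - image
    light -= 0.1 * 2.0 * np.mean(residual * albedo_perturbed)

err_after = np.mean((albedo_perturbed * light - image) ** 2)

print(f"image error before compensation: {err_before:.4f}")
print(f"image error after  compensation: {err_after:.6f}")
print(f"compensating light {light:.3f} vs true {light_true:.3f}")
```

The image error returns to (numerically) zero even though both recovered properties are wrong: the light converges to 2/1.5 ≈ 1.333, exactly cancelling the albedo perturbation. The paper's framework applies this perturb-then-compensate probe to the far richer property set of a neural microfacet model, where such null directions are harder to characterize analytically.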

References (34)
  1. Deep reflectance volumes: Relightable reconstructions from multi-view photometric images. In European Conference on Computer Vision (ECCV), pages 294–311, Berlin, Heidelberg, 2020. Springer-Verlag.
  2. NeRD: Neural reflectance decomposition from image collections. In IEEE International Conference on Computer Vision (ICCV), 2021a.
  3. Neural-PIL: Neural pre-integrated lighting for reflectance decomposition. In Advances in Neural Information Processing Systems (NeurIPS), 2021b.
  4. TensoRF: Tensorial radiance fields. In European Conference on Computer Vision (ECCV), 2022.
  5. SHINOBI: Shape and Illumination using Neural Object decomposition via BRDF optimization In-the-wild, 2023.
  6. Factored-NeuS: Reconstructing surfaces, illumination, and materials of possibly glossy objects, 2023.
  7. Deferred neural lighting: free-viewpoint relighting from unstructured photographs. ACM Transactions on Graphics (TOG), 39(6):258, 2020.
  8. Rotation-equivariant conditional spherical neural fields for learning a natural illumination prior. In Advances in Neural Information Processing Systems, 2022.
  9. Handbook of Image and Video Processing. Academic Press, Inc., USA, 1st edition, 2000.
  10. Object-centric neural scene rendering, 2020.
  11. NeRFReN: Neural radiance fields with reflections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18409–18418, 2022.
  12. TensoIR: Tensorial inverse rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
  13. LANe: Lighting-aware neural fields for compositional scene synthesis, 2023.
  14. ENVIDR: Implicit differentiable renderer with neural environment lighting. arXiv preprint arXiv:2303.13022, 2023.
  15. Neural sparse voxel fields. NeurIPS, 2020.
  16. Neural radiance transfer fields for relightable novel-view synthesis with global illumination. In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XVII, pages 153–169, Berlin, Heidelberg, 2022. Springer-Verlag.
  17. Diffusion posterior illumination for ambiguity-aware inverse rendering. ACM Transactions on Graphics, 42(6), 2023.
  18. Neural microfacet fields for inverse rendering, 2023.
  19. N. Max. Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2):99–108, 1995.
  20. NeRF: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.
  21. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph., 41(4):102:1–102:15, 2022.
  22. Neural lighting simulation for urban scenes. In Advances in Neural Information Processing Systems, pages 19291–19326. Curran Associates, Inc., 2023.
  23. ShaRF: Shape-conditioned radiance fields from a single view. In ICML, 2021.
  24. Plenoxels: Radiance fields without neural networks. In CVPR, 2022.
  25. NeRV: Neural reflectance and visibility fields for relighting and view synthesis. In CVPR, 2021.
  26. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In CVPR, 2022a.
  27. Improved direct voxel grid optimization for radiance fields reconstruction, 2022b.
  28. Ref-NeRF: Structured view-dependent appearance for neural radiance fields. CVPR, 2022.
  29. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
  30. PS-NeRF: Neural inverse rendering for multi-view photometric stereo. In European Conference on Computer Vision (ECCV), 2022.
  31. PhySG: Inverse rendering with spherical gaussians for physics-based material editing and relighting. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021a.
  32. The unreasonable effectiveness of deep features as a perceptual metric, 2018.
  33. NeRFactor: Neural factorization of shape and reflectance under an unknown illumination. ACM Trans. Graph., 40(6), 2021b.
  34. SIMBAR: Single image-based scene relighting for effective data augmentation for automated driving vision tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3718–3728, 2022.