NeRF Revisited: Fixing Quadrature Instability in Volume Rendering (2310.20685v2)

Published 31 Oct 2023 in cs.CV

Abstract: Neural radiance fields (NeRF) rely on volume rendering to synthesize novel views. Volume rendering requires evaluating an integral along each ray, which is numerically approximated with a finite sum that corresponds to the exact integral along the ray under piecewise constant volume density. As a consequence, the rendered result is unstable w.r.t. the choice of samples along the ray, a phenomenon that we dub quadrature instability. We propose a mathematically principled solution by reformulating the sample-based rendering equation so that it corresponds to the exact integral under piecewise linear volume density. This simultaneously resolves multiple issues: conflicts between samples along different rays, imprecise hierarchical sampling, and non-differentiability of quantiles of ray termination distances w.r.t. model parameters. We demonstrate several benefits over the classical sample-based rendering equation, such as sharper textures, better geometric reconstruction, and stronger depth supervision. Our proposed formulation can also be used as a drop-in replacement for the volume rendering equation of existing NeRF-based methods. Our project page can be found at pl-nerf.github.io.
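To make the distinction concrete, below is a minimal NumPy sketch, based only on the abstract, contrasting the classical NeRF quadrature (exact under piecewise constant density) with a trapezoidal variant that is exact under piecewise linear density. The function names and the trapezoidal optical-depth formula are illustrative assumptions for exposition, not the authors' released code.

```python
import numpy as np

def render_weights_constant(sigma, delta):
    """Classical NeRF quadrature: exact if density is piecewise CONSTANT.
    sigma: (N,) density at each sample; delta: (N,) interval lengths."""
    optical_depth = sigma * delta  # per-interval integral of sigma
    # Transmittance up to the start of each interval.
    T = np.exp(-np.concatenate([[0.0], np.cumsum(optical_depth)[:-1]]))
    alpha = 1.0 - np.exp(-optical_depth)  # opacity of each interval
    return T * alpha                      # ray-termination weights

def render_weights_linear(sigma, delta):
    """Sketch of a piecewise LINEAR alternative: density is linearly
    interpolated between adjacent samples, so each interval's optical
    depth is the trapezoid 0.5 * (sigma_i + sigma_{i+1}) * delta_i.
    sigma: (N+1,) densities at interval endpoints; delta: (N,) lengths."""
    optical_depth = 0.5 * (sigma[:-1] + sigma[1:]) * delta
    T = np.exp(-np.concatenate([[0.0], np.cumsum(optical_depth)[:-1]]))
    alpha = 1.0 - np.exp(-optical_depth)
    return T * alpha
```

Given per-interval colors rgb of shape (N, 3), the rendered pixel color in either case is (weights[:, None] * rgb).sum(axis=0). The point of the linear variant is that the weights vary smoothly as sample locations move, which is what the abstract identifies as the fix for quadrature instability.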
