
Estimating 3D Uncertainty Field: Quantifying Uncertainty for Neural Radiance Fields (2311.01815v2)

Published 3 Nov 2023 in cs.CV and cs.RO

Abstract: Current methods based on Neural Radiance Fields (NeRF) largely lack the capacity to quantify uncertainty in their predictions, particularly over unseen space, including occluded and out-of-scene content. This limitation hinders their broader application in robotics, where the reliability of model predictions must be accounted for in tasks such as exploration and planning in unknown environments. To address this, we propose a novel approach that estimates a 3D Uncertainty Field from the learned, incomplete scene geometry, explicitly identifying these unseen regions. By considering the accumulated transmittance along each camera ray, our Uncertainty Field infers 2D pixel-wise uncertainty, exhibiting high values for rays cast directly toward occluded or out-of-scene content. To quantify the uncertainty on the learned surface, we model a stochastic radiance field. Our experiments demonstrate that, compared with recent methods, ours is the only approach that can explicitly reason about high uncertainty both in unseen 3D regions and in the 2D pixels rendered from them. Furthermore, we show that the resulting uncertainty field is well suited to real-world robotics tasks such as next-best-view selection.
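The pixel-wise part of this idea can be made concrete. Below is a minimal NumPy sketch, not the authors' implementation (the function and variable names are hypothetical), of how the residual accumulated transmittance along a camera ray can serve as a per-pixel uncertainty score: a ray that terminates on well-reconstructed geometry absorbs almost all of its transmittance, while a ray heading into occluded or out-of-scene space retains most of it.

```python
import numpy as np

def ray_uncertainty(sigmas, deltas, eps=1e-10):
    """Per-ray uncertainty from residual accumulated transmittance.

    A sketch of the idea described in the abstract, under the standard
    NeRF volume-rendering model: the transmittance mass that survives
    past the last sample is the probability that the ray ends in unseen
    space, so we use it directly as the ray's uncertainty score.

    sigmas: (N,) volume densities at the N samples along the ray
    deltas: (N,) distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)  # per-sample opacity
    # T_i = prod_{j<i} (1 - alpha_j): transmittance reaching sample i
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas + eps)))[:-1]
    weights = trans * alphas                 # standard rendering weights
    # Mass not absorbed by any sample, i.e. prod_j (1 - alpha_j)
    return 1.0 - weights.sum()
```

For example, a ray through empty, never-observed space has near-zero densities everywhere, so the score stays close to 1 (high uncertainty), while a ray hitting a confidently reconstructed surface yields a score near 0.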

