REF²-NeRF: Reflection and Refraction aware Neural Radiance Field (2311.17116v4)

Published 28 Nov 2023 in cs.CV

Abstract: Recently, significant progress has been made in methods for 3D reconstruction from multiple images using implicit neural representations, exemplified by the neural radiance field (NeRF) method. Such methods, which are based on volume rendering, can model a variety of light phenomena, and numerous extensions have been proposed to accommodate different scenes and situations. However, when handling scenes with multiple glass objects, e.g., objects in a glass showcase, accurately modeling the target scene has been challenging due to the presence of multiple reflection and refraction effects. This paper therefore proposes a NeRF-based modeling method for scenes containing a glass case. In the proposed method, refraction and reflection are modeled using elements that are dependent on and independent of the viewer's perspective. This approach allows us to estimate the surfaces where refraction occurs, i.e., glass surfaces, and enables the separation and modeling of both the direct and reflected light components. The proposed method requires predetermined camera poses, but accurately estimating these poses in scenes with glass objects is difficult. Therefore, we used a robotic arm with an attached camera to acquire images with known poses. Compared to existing methods, the proposed method enables more accurate modeling of both glass refraction and the overall scene.

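To make the decomposition idea in the abstract concrete, below is a minimal sketch of standard NeRF-style volume rendering along one ray, extended so that each sample blends a view-independent "direct" radiance with a view-dependent "reflected" radiance. This is an illustration only, not the paper's exact formulation: the function composite_ray and the per-sample blend weight betas are hypothetical names introduced here, while the alpha-compositing quadrature itself is the usual NeRF volume-rendering rule.

```python
import numpy as np

def composite_ray(sigmas, direct_rgb, reflected_rgb, betas, deltas):
    """Volume-render one ray while blending two radiance components.

    sigmas        : (N,)   densities sampled along the ray
    direct_rgb    : (N, 3) view-independent (direct) radiance per sample
    reflected_rgb : (N, 3) view-dependent (reflected) radiance per sample
    betas         : (N,)   blend weight for the reflected term (hypothetical)
    deltas        : (N,)   distances between consecutive samples
    """
    # Per-sample opacity from density and step size.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
    # Standard rendering weights.
    weights = trans * alphas
    # Blend the direct and reflected components before compositing.
    rgb = (1.0 - betas)[:, None] * direct_rgb + betas[:, None] * reflected_rgb
    return (weights[:, None] * rgb).sum(axis=0)

# Toy usage: 64 random samples along a single ray.
N = 64
rng = np.random.default_rng(0)
color = composite_ray(
    sigmas=rng.uniform(0.0, 5.0, N),
    direct_rgb=rng.uniform(0.0, 1.0, (N, 3)),
    reflected_rgb=rng.uniform(0.0, 1.0, (N, 3)),
    betas=rng.uniform(0.0, 1.0, N),
    deltas=np.full(N, 0.02),
)
print(color)  # composited RGB for this ray
```

In the actual method, the blend between components and the location of the refracting glass surfaces would be learned quantities rather than supplied inputs; the sketch only shows how two separated light components can be composited under the same volume-rendering weights.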