
Sharp-NeRF: Grid-based Fast Deblurring Neural Radiance Fields Using Sharpness Prior (2401.00825v1)

Published 1 Jan 2024 in cs.CV, cs.GR, and eess.IV

Abstract: Neural Radiance Fields (NeRF) have shown remarkable performance in neural rendering-based novel view synthesis. However, NeRF suffers from severe visual quality degradation when the input images are captured under imperfect conditions, such as poor illumination, defocus blur, and lens aberrations. Defocus blur, in particular, is common in images captured with ordinary cameras. Although a few recent studies have proposed methods that render sharp images of considerably high quality, they still face many key challenges. In particular, those methods employ a Multi-Layer Perceptron (MLP) based NeRF, which requires tremendous computational time. To overcome these shortcomings, this paper proposes a novel technique, Sharp-NeRF -- a grid-based NeRF that renders clean and sharp images from blurry input images within half an hour of training. To do so, we use several grid-based kernels to accurately model the sharpness/blurriness of the scene. The sharpness level of the pixels is computed to learn the spatially varying blur kernels. We have conducted experiments on benchmarks consisting of blurry images and have evaluated both full-reference and no-reference metrics. The qualitative and quantitative results reveal that our approach renders sharp novel views with vivid colors and fine details, and it achieves considerably faster training than previous works. Our project page is available at https://benhenryl.github.io/SharpNeRF/
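To make the kernel-learning idea in the abstract concrete, the sketch below shows one plausible way to compute a per-pixel sharpness prior (here, local variance of the Laplacian response) and to apply a spatially varying blur kernel to a rendered sharp patch. This is a minimal illustration assuming NumPy/SciPy; the function names (`sharpness_map`, `blur_rendered_pixel`) and the Laplacian-variance measure are assumptions for illustration, not the authors' exact formulation of the sharpness prior or the grid-based kernels.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def sharpness_map(gray_image, window=7):
    """Per-pixel sharpness prior: local variance of the Laplacian response.

    Sharp regions produce strong high-frequency responses, so their local
    Laplacian variance is high. (Hypothetical measure; the paper does not
    specify this exact operator here.)
    """
    lap = laplace(gray_image.astype(np.float64))
    mean = uniform_filter(lap, size=window)
    mean_sq = uniform_filter(lap * lap, size=window)
    return np.clip(mean_sq - mean * mean, 0.0, None)

def blur_rendered_pixel(sharp_patch, kernel_weights):
    """Apply a learned per-pixel blur kernel to a rendered sharp patch.

    In a Sharp-NeRF-style pipeline, `kernel_weights` would be selected or
    interpolated from a small grid of learnable kernels according to the
    pixel's sharpness level; here it is simply a normalized weight array
    of the same shape as the patch.
    """
    w = kernel_weights / kernel_weights.sum()
    return float((sharp_patch * w).sum())

# Toy usage: edges of a synthetic square score higher than flat regions.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
s = sharpness_map(img)
print(s.max() > s.min())
```

The intent of such a prior is that pixels measured as sharp in the input keep a near-delta kernel, while blurrier pixels are explained by wider kernels applied to the sharp grid-based rendering during training.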

Authors (4)
  1. Byeonghyeon Lee (6 papers)
  2. Howoong Lee (2 papers)
  3. Usman Ali (33 papers)
  4. Eunbyung Park (42 papers)
Citations (6)
