Self-Supervised Spatially Variant PSF Estimation for Aberration-Aware Depth-from-Defocus (2402.18175v1)

Published 28 Feb 2024 in cs.CV and eess.IV

Abstract: In this paper, we address the task of aberration-aware depth-from-defocus (DfD), which accounts for the spatially variant point spread functions (PSFs) of a real camera. To obtain the spatially variant PSFs of a real camera without requiring any ground-truth PSFs, we propose a novel self-supervised learning method that leverages pairs of real sharp and blurred images, which can easily be captured by changing the camera's aperture setting. In our PSF estimation, we assume rotationally symmetric PSFs and introduce a polar coordinate system to train the PSF estimation network more accurately. We also handle the focus breathing phenomenon that occurs in real DfD situations. Experimental results on synthetic and real data demonstrate the effectiveness of our method for both PSF estimation and depth estimation.
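The self-supervision signal described in the abstract can be illustrated with a short sketch: blur the sharp (narrow-aperture) capture with a candidate PSF and penalize the difference from the real blurred (wide-aperture) capture. The PyTorch sketch below also illustrates the rotational-symmetry assumption by expanding a 1D radial profile into a 2D kernel. This is a minimal illustration under stated assumptions, not the paper's implementation: it uses a single spatially invariant kernel for brevity (the paper estimates spatially variant PSFs with a network), and the helper names `radial_to_kernel` and `reconstruction_loss` are hypothetical.

```python
import torch
import torch.nn.functional as F


def radial_to_kernel(profile: torch.Tensor, k: int) -> torch.Tensor:
    """Expand a 1D radial profile into a rotationally symmetric
    k x k PSF by sampling the profile at each pixel's radius."""
    r_max = (k - 1) / 2.0
    ys, xs = torch.meshgrid(
        torch.arange(k) - r_max, torch.arange(k) - r_max, indexing="ij"
    )
    radius = torch.sqrt(xs**2 + ys**2)
    # Linearly interpolate the profile at fractional radii;
    # corner radii (> r_max) are clamped to the outermost sample.
    n = profile.numel()
    idx = (radius / r_max * (n - 1)).clamp(0, n - 1)
    lo = idx.floor().long()
    hi = idx.ceil().long()
    frac = idx - lo.float()
    kernel = profile[lo] * (1 - frac) + profile[hi] * frac
    # Normalize so overall image brightness is preserved.
    return kernel / kernel.sum()


def reconstruction_loss(sharp, blurred, profile, k=21):
    """Self-supervised loss: blur the sharp (narrow-aperture) capture
    with the estimated PSF and compare it against the real blurred
    (wide-aperture) capture of the same scene."""
    psf = radial_to_kernel(profile, k)
    c = sharp.shape[1]
    # Depthwise convolution: apply the same kernel to each channel.
    kernel = psf.reshape(1, 1, k, k).repeat(c, 1, 1, 1)
    pred = F.conv2d(sharp, kernel, padding=k // 2, groups=c)
    return F.l1_loss(pred, blurred)
```

In the paper's setting, such a radial profile would be predicted per image location by the PSF estimation network rather than optimized directly, so the learned PSFs can vary across the field of view.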
