
Depth and DOF Cues Make A Better Defocus Blur Detector (2306.11334v1)

Published 20 Jun 2023 in cs.CV

Abstract: Defocus blur detection (DBD) separates in-focus and out-of-focus regions in an image. Previous approaches often mistook homogeneous in-focus areas for defocus blur regions, likely because they did not consider the internal factors that cause defocus blur. Inspired by the relationship among depth, depth of field (DOF), and defocus, we propose an approach called D-DFFNet, which incorporates depth and DOF cues in an implicit manner, allowing the model to understand the defocus phenomenon in a more natural way. Our method uses a depth feature distillation strategy to obtain depth knowledge from a pre-trained monocular depth estimation model, and a DOF-edge loss to capture the relationship between DOF and depth. Our approach outperforms state-of-the-art methods on public benchmarks and on a newly collected large benchmark dataset, EBD. Source code and the EBD dataset are available at: https://github.com/yuxinjin-whu/D-DFFNet.
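The abstract does not give the distillation objective explicitly, but the depth feature distillation strategy it describes typically amounts to matching the student's intermediate features against those of a frozen depth teacher. The sketch below illustrates that idea with a plain mean-squared-error feature loss; the function name `depth_distillation_loss` and the feature shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def depth_distillation_loss(student_feat: np.ndarray, teacher_feat: np.ndarray) -> float:
    """MSE between student features and features from a frozen depth teacher.

    Both inputs are assumed to share a (batch, channels, height, width) shape,
    e.g. after an adaptation layer aligns the student's channel count.
    """
    assert student_feat.shape == teacher_feat.shape, "features must be aligned first"
    return float(np.mean((student_feat - teacher_feat) ** 2))

# Toy example: random feature maps standing in for real network activations.
rng = np.random.default_rng(0)
student = rng.standard_normal((1, 2, 4, 4))
teacher = rng.standard_normal((1, 2, 4, 4))
loss = depth_distillation_loss(student, teacher)
```

In training, this term would be added to the main DBD segmentation loss so the student absorbs depth knowledge without needing depth labels at inference time.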

Authors (5)
  1. Yuxin Jin (4 papers)
  2. Ming Qian (9 papers)
  3. Jincheng Xiong (2 papers)
  4. Nan Xue (61 papers)
  5. Gui-Song Xia (139 papers)
Citations (1)

