
Deep learning in motion deblurring: current status, benchmarks and future prospects (2401.05055v2)

Published 10 Jan 2024 in cs.CV

Abstract: Motion deblurring is one of the fundamental problems of computer vision and has received continuous attention. The variability in blur, both within and across images, imposes limitations on non-blind deblurring techniques that rely on estimating the blur kernel. As a response, blind motion deblurring has emerged, aiming to restore clear and detailed images without prior knowledge of the blur type, fueled by the advancements in deep learning methodologies. Despite strides in this field, a comprehensive synthesis of recent progress in deep learning-based blind motion deblurring is notably absent. This paper fills that gap by providing an exhaustive overview of the role of deep learning in blind motion deblurring, encompassing datasets, evaluation metrics, and methods developed over the last six years. Specifically, we first introduce the types of motion blur and the fundamental principles of deblurring. Next, we outline the shortcomings of traditional non-blind deblurring algorithms, emphasizing the advantages of employing deep learning techniques for deblurring tasks. Following this, we categorize and summarize existing blind motion deblurring methods based on different backbone networks, including convolutional neural networks, generative adversarial networks, recurrent neural networks, and Transformer networks. Subsequently, we elaborate not only on the fundamental principles of these different categories but also provide a comprehensive summary and comparison of their advantages and limitations. Qualitative and quantitative experimental results conducted on four widely used datasets further compare the performance of SOTA methods. Finally, we analyze present challenges and future pathways. All collected models, benchmark datasets, source code links, and codes for evaluation have been made publicly available at https://github.com/VisionVerse/Blind-Motion-Deblurring-Survey

Authors (6)
  1. Yawen Xiang (2 papers)
  2. Heng Zhou (47 papers)
  3. Chengyang Li (22 papers)
  4. Fangwei Sun (1 paper)
  5. Zhongbo Li (5 papers)
  6. Yongqiang Xie (5 papers)
Citations (2)

Summary

Insights into Deep Learning Techniques for Blind Motion Deblurring

The paper "Deep learning in motion deblurring: current status, benchmarks and future prospects" provides a comprehensive review of recent advances in blind motion deblurring, a critical subfield of computer vision concerned with restoring sharp images from blurred ones without prior knowledge of the blur. Traditional non-blind methods are limited by their dependence on precise blur kernel estimation; blind motion deblurring, powered by deep learning, has emerged as a capable alternative.
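To make the degradation being inverted concrete: under the standard uniform blur model, the blurred image is the sharp image convolved with a blur kernel plus noise, B = K * S + N. The NumPy sketch below is purely illustrative (the kernel and image are toy values, not taken from the paper); it shows how a horizontal motion kernel smears a sharp vertical edge, which is exactly the degradation a deblurring network learns to undo:

```python
import numpy as np

def blur(sharp, kernel, noise_sigma=0.0, rng=None):
    """Apply the uniform blur model B = K * S + N via 2-D convolution
    with a zero-padded 'same'-sized output."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(sharp, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(sharp, dtype=float)
    for i in range(sharp.shape[0]):
        for j in range(sharp.shape[1]):
            patch = padded[i:i + kh, j:j + kw]
            # Flip the kernel for true convolution (vs. correlation).
            out[i, j] = np.sum(patch * kernel[::-1, ::-1])
    if noise_sigma > 0:
        rng = rng or np.random.default_rng(0)
        out += rng.normal(0.0, noise_sigma, out.shape)
    return out

# A 1x5 horizontal box kernel approximates linear camera motion.
kernel = np.ones((1, 5)) / 5.0
sharp = np.zeros((8, 8))
sharp[:, 4] = 1.0                # a sharp vertical line
blurred = blur(sharp, kernel)    # the line is smeared horizontally
```

Non-blind methods assume `kernel` is known (or estimable); the blind methods surveyed here must recover `sharp` from `blurred` alone.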

Overview

The paper traces the developments over the past six years, highlighting the evolution of deep learning methodologies applied to blind motion deblurring. The authors categorize existing approaches based on their underlying architectures, specifically focusing on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and Transformer networks. Each category is analyzed for its fundamental principles, highlighting the advantages and limitations through both qualitative and quantitative experimental comparisons across multiple datasets.

Significantly, the review identifies CNN-based methods as the pioneering efforts, driven by their robust spatial feature learning. Their limitations in capturing long-range dependencies motivated RNN-based approaches, which exploit temporal or spatially sequential information. GANs introduce an adversarial framework that enhances the realism of deblurred images, though training stability remains a weakness. Most recently, Transformers bring marked improvements in handling non-uniform blur: their attention mechanisms capture complex dependencies across spatial regions of an image.
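The long-range modeling that distinguishes the Transformer family can be sketched in a few lines. The scaled dot-product attention below is a minimal NumPy illustration with made-up token shapes (it is not any specific deblurring architecture from the survey); each token, e.g. a flattened image patch, aggregates information from every other token, which is what lets these models relate distant blurred regions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every query attends to all keys,
    giving the global receptive field CNNs lack."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # (n_q, n_k) pairwise similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # weighted mixture of values

rng = np.random.default_rng(0)
n, d = 16, 8                             # e.g. 16 patch tokens, 8-dim features
tokens = rng.normal(size=(n, d))
out = attention(tokens, tokens, tokens)  # self-attention over all tokens
```

The quadratic cost of the `scores` matrix in the number of tokens is precisely why the surveyed methods adopt windowed, strip-wise, or frequency-domain variants for high-resolution images.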

Dataset Utilization and Evaluation

The paper systematically reviews available datasets, distinguishing synthetic from real-world collections. This distinction is crucial: synthetic datasets such as GoPro offer controlled environments for training but generalize poorly to real-world scenarios, which are better represented by datasets such as RealBlur. Evaluation relies heavily on objective metrics like PSNR and SSIM, which quantify deblurring efficacy and correlate with subjective human assessment, albeit imperfectly.
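PSNR, the workhorse metric above, is simply the peak signal power over the mean squared error between the restored image and the sharp reference, expressed in decibels. A minimal NumPy implementation with toy images follows (SSIM is considerably more involved and is usually taken from a library such as scikit-image rather than hand-rolled):

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the restored
    image is numerically closer to the sharp reference."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

sharp = np.full((4, 4), 100.0)
restored = sharp + 10.0   # a uniform error of 10 gray levels -> MSE = 100
print(round(psnr(sharp, restored), 2))  # 10*log10(255^2/100) ≈ 28.13
```

Typical state-of-the-art deblurring results on GoPro sit in the low-to-mid 30s dB, which gives a sense of scale for the numbers reported in the survey's comparison tables.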

Major Findings and Implications

Transformer-based models demonstrate the highest performance across metrics, achieving superior PSNR and SSIM values on challenging datasets. These results underscore the potential of attention mechanisms in handling complex, multi-dimensional blur patterns, suggesting that future advancements may leverage hybrid models that integrate the strengths of CNNs, RNNs, and Transformers. However, the generalization of these models to real-world data remains a significant challenge due to the domain shift, emphasizing the need for more diverse and authentic training datasets.

The exploration of lightweight models also emerges as a critical research direction, as real-time applications demand efficient, less computationally intensive architectures that do not sacrifice deblurring quality.

Future Directions

The paper advocates for several future advancements in blind motion deblurring. The most pressing challenge is enhancing the generalization capabilities of models across varied real-world scenarios, which can be potentially addressed through unsupervised learning techniques and few-shot learning paradigms. Additionally, future work should explore more innovative network structures and objective evaluation metrics to better capture and quantitatively assess deblurring performance. The authors highlight the integration of diffusion models and the exploration of blur removal as a pre-processing step for higher-level visual tasks such as object detection and segmentation.

Conclusion

In conclusion, the surveyed paper provides an exhaustive insight into the advancements and challenges in blind motion deblurring using deep learning techniques. While significant progress has been made, the field continues to grapple with challenges related to generalization, data availability, and the balance between model complexity and performance. The path forward lies in embracing hybrid models and leveraging advancements in related fields to enhance both the theoretical understanding and practical applicability of image deblurring technologies.