Scale-recurrent Network for Deep Image Deblurring (1802.01770v1)

Published 6 Feb 2018 in cs.CV

Abstract: In single image deblurring, the "coarse-to-fine" scheme, i.e. gradually restoring the sharp image on different resolutions in a pyramid, is very successful in both traditional optimization-based methods and recent neural-network-based approaches. In this paper, we investigate this strategy and propose a Scale-recurrent Network (SRN-DeblurNet) for this deblurring task. Compared with the many recent learning-based approaches in [25], it has a simpler network structure, a smaller number of parameters and is easier to train. We evaluate our method on large-scale deblurring datasets with complex motion. Results show that our method can produce better quality results than state-of-the-arts, both quantitatively and qualitatively.

Citations (1,040)

Summary

  • The paper introduces a scale-recurrent deblurring network that shares weights across scales, reducing model complexity and improving deblurring performance.
  • It leverages an encoder-decoder architecture with ResBlock modules and ConvLSTM to effectively capture multi-scale features.
  • Experimental results show superior PSNR and SSIM on the GOPRO and Köhler datasets, outperforming state-of-the-art methods.

Scale-recurrent Network for Deep Image Deblurring

The paper, "Scale-recurrent Network for Deep Image Deblurring" by Xin Tao et al., proposes a novel approach to address the problem of single image deblurring by introducing a Scale-recurrent Network (SRN-DeblurNet). This method builds upon the widely acknowledged coarse-to-fine strategy in image restoration but diverges by implementing a scale-recurrent mechanism, reducing network complexity and training demands while improving performance metrics on complex motion deblurring tasks.

Problem Context and Prior Work

Single image deblurring is a well-known challenge in computer vision, where the goal is to recover sharp images from blurry inputs caused by factors like camera shake and object motion. Traditional methods have relied on various image priors and constraints to tackle this inherently ill-posed problem, often at the expense of computational intensity and reliance on strict blur models. The advent of learning-based methods, especially CNNs, has alleviated some limitations by leveraging large-scale data to learn more effective deblurring features.

Contributions and Methodology

The proposed SRN-DeblurNet addresses two key issues in CNN-based deblurring frameworks: how to share parameters across scales, and how to handle large motion blur efficiently.

  1. Scale-recurrent Structure: Unlike cascaded networks that use independent parameters at each scale, SRN-DeblurNet shares weights across scales. This sharply reduces the parameter count and training complexity, while a recurrent module (here, a ConvLSTM) carries intermediate state from coarser to finer scales and stabilizes the coarse-to-fine restoration.
  2. Encoder-decoder ResBlock Network: The per-scale network is an encoder-decoder in which plain convolution layers are replaced by ResBlocks, improving convergence and feature representation. The large receptive field of this structure is particularly advantageous for handling extensive motion blur. A schematic sketch of both components follows this list.
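
To make the two components above concrete, here is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a small encoder-decoder with ResBlocks and a hand-rolled ConvLSTM cell, applied with the same weights at every scale of an image pyramid. Channel widths, kernel sizes, the number of scales, and the absence of strided downsampling and skip connections are all simplifications relative to the actual SRN-DeblurNet.

```python
# Illustrative sketch only (not the published SRN-DeblurNet code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Two convolutions with an additive skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 5, padding=2)
        self.conv2 = nn.Conv2d(ch, ch, 5, padding=2)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell used to pass hidden state between scales."""
    def __init__(self, ch):
        super().__init__()
        # One convolution produces the input, forget, output and cell gates.
        self.gates = nn.Conv2d(2 * ch, 4 * ch, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class SRNSketch(nn.Module):
    """Shared-weight encoder / ConvLSTM / decoder reused at every scale."""
    def __init__(self, ch=32):
        super().__init__()
        # Input: blurry image concatenated with the upsampled previous estimate.
        self.encoder = nn.Sequential(nn.Conv2d(6, ch, 5, padding=2),
                                     ResBlock(ch), ResBlock(ch))
        self.lstm = ConvLSTMCell(ch)
        self.decoder = nn.Sequential(ResBlock(ch), ResBlock(ch),
                                     nn.Conv2d(ch, 3, 5, padding=2))

    def forward(self, blurry, n_scales=3):
        b, _, h, w = blurry.shape
        # Start from the coarsest scale with a zero-initialized recurrent state.
        size = (h // 2 ** (n_scales - 1), w // 2 ** (n_scales - 1))
        estimate = F.interpolate(blurry, size=size, mode='bilinear',
                                 align_corners=False)
        hidden, outputs = None, []
        for s in range(n_scales):  # coarse -> fine, same weights on every pass
            size = (h // 2 ** (n_scales - 1 - s), w // 2 ** (n_scales - 1 - s))
            blur_s = F.interpolate(blurry, size=size, mode='bilinear',
                                   align_corners=False)
            estimate = F.interpolate(estimate, size=size, mode='bilinear',
                                     align_corners=False)
            feat = self.encoder(torch.cat([blur_s, estimate], dim=1))
            if hidden is None:
                hidden = (torch.zeros_like(feat), torch.zeros_like(feat))
            else:
                # Carry the recurrent state up to the finer resolution.
                hidden = tuple(F.interpolate(t, size=size, mode='bilinear',
                                             align_corners=False) for t in hidden)
            feat, hidden = self.lstm(feat, hidden)
            estimate = self.decoder(feat)
            outputs.append(estimate)
        return outputs  # sharp-image estimates at every scale
```

The essentials the sketch preserves are that a single set of weights is reused at every scale and that the ConvLSTM state is upsampled and carried from coarser to finer levels; the published network additionally uses strided convolutions for downsampling, symmetric skip connections between encoder and decoder, and more ResBlocks per level.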

Experimental Validation

The SRN-DeblurNet was trained and evaluated on a large-scale dataset synthesized from high-speed video frames, ensuring realistic blur patterns. Quantitative evaluation used PSNR and SSIM (a brief sketch of how these metrics are typically computed follows the list below), showing substantial improvements over state-of-the-art methods:

  • The proposed method outperforms prior works such as the multi-scale network by Nah et al. in terms of PSNR (30.10 vs 29.08) and SSIM (0.9323 vs 0.9135) on the GOPRO dataset.
  • On the Köhler dataset, the proposed network also demonstrated superior performance with notable improvements in deblurring quality.
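
For context, the following is a generic sketch of how PSNR and SSIM are commonly computed for such benchmarks using scikit-image; it is not the authors' evaluation script, and it assumes restored and ground-truth images are float arrays in [0, 1] with shape (H, W, 3).

```python
# Generic PSNR/SSIM evaluation sketch (not from the paper).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(restored: np.ndarray, sharp: np.ndarray):
    """Return (PSNR, SSIM) for one restored/ground-truth image pair."""
    psnr = peak_signal_noise_ratio(sharp, restored, data_range=1.0)
    ssim = structural_similarity(sharp, restored, data_range=1.0, channel_axis=-1)
    return psnr, ssim

def evaluate_dataset(pairs):
    """Average PSNR and SSIM over an iterable of (restored, sharp) pairs."""
    psnrs, ssims = zip(*(evaluate_pair(r, s) for r, s in pairs))
    return float(np.mean(psnrs)), float(np.mean(ssims))
```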

Visual comparisons reaffirm the quantitative findings, with the SRN-DeblurNet effectively preserving finer image details and reducing artifacts compared to both traditional methods and recent neural network-based approaches.

Implications and Future Directions

The SRN-DeblurNet's reduced parameter complexity and superior performance demonstrate its potential for broader applications in image processing tasks beyond deblurring. The scale-recurrent mechanism can be particularly beneficial for tasks requiring multi-scale feature integration while maintaining model stability and reducing overfitting risks.

Future research may delve into extending this architecture to other domains such as super-resolution, image synthesis, and video processing, where multi-scale recurrent structures can significantly enhance performance. Additionally, exploring alternative recurrent modules and optimizing training methodologies to further exploit the advantages of shared weights across scales may yield even more robust models.

In summary, the SRN-DeblurNet represents a significant advancement in the field of image deblurring, offering a balanced approach between network complexity and deblurring efficacy. The methodological innovations presented in this paper provide a promising direction for future explorations in image restoration and beyond.