Deblurring by Realistic Blurring (2004.01860v2)

Published 4 Apr 2020 in cs.CV

Abstract: Existing deep learning methods for image deblurring typically train models using pairs of sharp images and their blurred counterparts. However, synthetically blurring images do not necessarily model the genuine blurring process in real-world scenarios with sufficient accuracy. To address this problem, we propose a new method which combines two GAN models, i.e., a learning-to-Blur GAN (BGAN) and learning-to-DeBlur GAN (DBGAN), in order to learn a better model for image deblurring by primarily learning how to blur images. The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images. In order to reduce the discrepancy between real blur and synthesized blur, a relativistic blur loss is leveraged. As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images. Our experiments show that the proposed method achieves consistently superior quantitative performance as well as higher perceptual quality on both the newly proposed dataset and the public GOPRO dataset.

Overview of the "Deblurring by Realistic Blurring" Paper

The paper, authored by Zhang et al., offers a novel perspective on the persistent problem of image deblurring by focusing on the blurring process itself. Rather than following the conventional methodology of synthesizing blurred images and using them as inputs to train deblurring models, the authors pursue a more nuanced approach that aims to imitate real-world blurring conditions in order to achieve more effective deblurring.

The authors introduce a two-part framework that leverages Generative Adversarial Networks (GANs): the learning-to-Blur GAN (BGAN) and the learning-to-DeBlur GAN (DBGAN). The aim is to construct more authentic blurring models to enhance the efficacy of deblurring algorithms, addressing the discrepancies between synthetically generated and real-world blurred images.
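To make the two-stage data flow concrete, the following is a minimal PyTorch-style sketch of how the two generators could be chained during training. The module names (`BGANGenerator`, `DBGANGenerator`), the tiny convolutional stacks, and the noise-map conditioning are illustrative assumptions for this summary, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical generator stubs standing in for the paper's BGAN and DBGAN.
class BGANGenerator(nn.Module):
    """Maps a sharp image (plus a noise map) to a realistically blurred one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )

    def forward(self, sharp, noise):
        return self.net(torch.cat([sharp, noise], dim=1))

class DBGANGenerator(nn.Module):
    """Maps a blurred image back to a sharp estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )

    def forward(self, blurred):
        return self.net(blurred)

# Chained forward pass: sharp -> synthetic blur -> restored sharp.
bgan, dbgan = BGANGenerator(), DBGANGenerator()
sharp = torch.rand(1, 3, 128, 128)      # a sharp training image
noise = torch.rand(1, 1, 128, 128)      # noise map conditioning the blur
synthetic_blur = bgan(sharp, noise)     # BGAN produces a realistic-looking blur
restored = dbgan(synthetic_blur)        # DBGAN learns to undo it
reconstruction_loss = nn.functional.l1_loss(restored, sharp)
```

In the full method, adversarial terms on both generators would be added to this reconstruction objective.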

Methodology and Proposed Framework

The proposed solution employs two interconnected GAN models, with the BGAN responsible for learning the characteristics of real-world blur. Using unpaired sets of sharp and blurry images, the BGAN is trained to generate more realistic blurred representations. The pivotal innovation is the relativistic blur loss, which trains the discriminator to predict the relative realism of synthetic versus authentic blurry images, thereby narrowing the gap between synthetic training data and real-world blur.
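The exact loss is not reproduced in this summary, but it follows the relativistic-average GAN idea: the discriminator scores how much more realistic real blurry images are than generated ones, rather than scoring each image in isolation. Below is a minimal sketch under that assumption; `disc` is a hypothetical discriminator returning raw logits.

```python
import torch
import torch.nn.functional as F

def relativistic_blur_losses(disc, real_blur, fake_blur):
    """Relativistic-average adversarial losses for the blurring branch (sketch).

    The discriminator is pushed to rate real blurry images as more realistic
    than the average generated blur; the generator sees the labels swapped.
    """
    real_logits = disc(real_blur)
    fake_logits = disc(fake_blur)

    # Relative scores against the mean logit of the opposite set.
    rel_real = real_logits - fake_logits.mean()
    rel_fake = fake_logits - real_logits.mean()

    d_loss = (F.binary_cross_entropy_with_logits(rel_real, torch.ones_like(rel_real))
              + F.binary_cross_entropy_with_logits(rel_fake, torch.zeros_like(rel_fake)))

    g_loss = (F.binary_cross_entropy_with_logits(rel_fake, torch.ones_like(rel_fake))
              + F.binary_cross_entropy_with_logits(rel_real, torch.zeros_like(rel_real)))
    return d_loss, g_loss
```

In practice the discriminator and generator updates would use detached or fresh forward passes as appropriate; the sketch only illustrates the relativistic scoring.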

The outputs of the BGAN are fed into the DBGAN module, which is trained to restore these blurred images to their sharp counterparts. The DBGAN architecture adopts strategies that have proven effective in previous deblurring work, such as omitting batch normalization for better performance and using residual blocks to exploit multi-level connections (see the sketch below).
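As an illustration of that design choice, here is a minimal residual block without batch normalization; the channel count, activation, and block depth are assumptions for this sketch rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block without batch normalization, as favoured in many
    recent deblurring and super-resolution generators."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # No BN layers: the skip connection carries the identity directly.
        residual = self.conv2(self.act(self.conv1(x)))
        return x + residual

# Example: stack several blocks into a simple generator trunk.
trunk = nn.Sequential(*[ResidualBlock(64) for _ in range(8)])
features = trunk(torch.rand(1, 64, 64, 64))
```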

Numerical Results and Contributions

Zhang et al.’s experiments demonstrate that the proposed network consistently delivers superior performance when evaluated on both the proposed Real-World Blurred Image (RWBI) dataset and the established GOPRO dataset—achieving state-of-the-art quantitative benchmarks in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM).
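For reference, PSNR and SSIM are the standard full-reference metrics behind this comparison. A minimal sketch of how they might be computed for a deblurred/ground-truth pair is shown below; the random image arrays are placeholders, and SSIM is taken from scikit-image (newer versions use `channel_axis`, older ones `multichannel=True`).

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Placeholder images standing in for a ground-truth / deblurred pair.
gt = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
pred = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

print("PSNR:", psnr(gt, pred))
print("SSIM:", structural_similarity(gt, pred, channel_axis=-1))
```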

The paper claims several key contributions:

  1. A dual-process framework that models both the blurring and deblurring procedures to improve the accuracy and robustness of deblurring results.
  2. Introduction of the relativistic blur loss to narrow the reality gap between synthetic and real blur.
  3. Creation of the RWBI dataset, which expands the evaluation landscape for image deblurring in practical scenarios.

Implications and Future Work

The implications of the proposed methodology extend to practical applications where image clarity and processing speed are crucial, such as mobile photography, medical imaging, and autonomous vehicular perception systems. By reducing the dependence on assumptions about the exact type of blur, the approach promises broad applicability across varied blur-inducing scenarios.

The paper sets a foundation for further exploration into refining GAN-based models for other image restoration tasks. Future research may consider expanding the model to handle more complex scenarios of blurring, such as temporal variations in video sequences or minimally supervised adaptations to other unknown noise conditions. Additionally, evaluations on larger and more diverse datasets could further consolidate the robustness of this approach.

In conclusion, the paper offers a distinct advancement in the field of computational deblurring by challenging conventional training paradigms and directing attention to the blurring process itself. This framework opens doors for both theoretical exploration and practical implementation in the broader field of computer vision.

Authors (7)
  1. Kaihao Zhang (55 papers)
  2. Wenhan Luo (88 papers)
  3. Yiran Zhong (75 papers)
  4. Lin Ma (206 papers)
  5. Bjorn Stenger (14 papers)
  6. Wei Liu (1135 papers)
  7. Hongdong Li (172 papers)
Citations (321)