
Domain Adaptation for Image Dehazing (2005.04668v1)

Published 10 May 2020 in cs.CV

Abstract: Image dehazing using learning-based methods has achieved state-of-the-art performance in recent years. However, most existing methods train a dehazing model on synthetic hazy images, which are less able to generalize well to real hazy images due to domain shift. To address this issue, we propose a domain adaptation paradigm, which consists of an image translation module and two image dehazing modules. Specifically, we first apply a bidirectional translation network to bridge the gap between the synthetic and real domains by translating images from one domain to another. And then, we use images before and after translation to train the proposed two image dehazing networks with a consistency constraint. In this phase, we incorporate the real hazy image into the dehazing training via exploiting the properties of the clear image (e.g., dark channel prior and image gradient smoothing) to further improve the domain adaptivity. By training image translation and dehazing network in an end-to-end manner, we can obtain better effects of both image translation and dehazing. Experimental results on both synthetic and real-world images demonstrate that our model performs favorably against the state-of-the-art dehazing algorithms.

Citations (286)

Summary

  • The paper presents a novel framework that integrates scene depth-informed image translation with domain-specific dehazing networks to address domain shift.
  • It employs bidirectional translation and consistency constraints to transform synthetic images and ensure robust dehazing performance.
  • Experimental results on synthetic and real datasets demonstrate significant improvements in PSNR and SSIM, benefiting real-world applications.

Domain Adaptation for Image Dehazing: A Comprehensive Analysis

The research paper "Domain Adaptation for Image Dehazing" by Yuanjie Shao et al. addresses the challenge of domain shift in image dehazing models, a critical issue in computer vision where synthetic training sets do not generalize well to real-world applications. The authors propose an innovative framework that combines image translation and dehazing networks to effectively bridge the gap between synthetic and real hazy images.

Methodology Overview

The proposed framework integrates two components: an image translation module and two domain-specific dehazing modules. The translation module comprises bidirectional networks that convert images between the synthetic and real domains, reducing the domain discrepancy and improving the robustness and domain adaptivity of the dehazing models. Notably, the synthetic-to-real translation network incorporates scene depth information via a Spatial Feature Transform (SFT) layer, using depth-conditioned features to better simulate real-world haze, whose density typically grows with scene depth.
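The paper's SFT layer is not reproduced here, but its core operation, modulating feature maps with an affine transform predicted from depth features, can be sketched in NumPy. The tensor shapes, the 1x1-convolution parameterization, and the random weights below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # A 1x1 convolution is a per-pixel linear map over channels:
    # x has shape (C_in, H, W), w has shape (C_out, C_in).
    return np.einsum('oc,chw->ohw', w, x)

def sft_layer(feat, depth_feat, w_gamma, w_beta):
    # Spatial Feature Transform: scale and shift the feature map with
    # per-pixel affine parameters predicted from the depth condition.
    gamma = conv1x1(depth_feat, w_gamma)  # per-pixel scale
    beta = conv1x1(depth_feat, w_beta)    # per-pixel shift
    return gamma * feat + beta

# Toy tensors standing in for decoder features and depth features.
C, H, W = 8, 4, 4
feat = rng.standard_normal((C, H, W))
depth_feat = rng.standard_normal((C, H, W))
w_gamma = 0.1 * rng.standard_normal((C, C))
w_beta = 0.1 * rng.standard_normal((C, C))

out = sft_layer(feat, depth_feat, w_gamma, w_beta)
print(out.shape)  # (8, 4, 4)
```

In the actual model the gamma/beta predictors would be small learned convolutional branches rather than fixed random matrices; the point is that depth conditions the translation spatially, not just globally.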

After translation, the authors train two dehazing modules, one per domain, on both the original and translated images, with a consistency constraint that keeps the two networks' outputs coherent. This dual-domain approach lets image translation and dehazing reinforce each other.
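The paper does not ship loss code; a minimal NumPy sketch of two real-image-side ingredients it names, an L1 consistency term between the two dehazing outputs and a dark-channel-prior penalty on the dehazed real image, might look as follows (the patch size and the simple nested-loop minimum filter are illustrative choices, not the authors' implementation):

```python
import numpy as np

def l1_consistency(pred_a, pred_b):
    # Consistency constraint: the two domain-specific dehazing networks
    # should agree on an image and its translated counterpart.
    return np.abs(pred_a - pred_b).mean()

def dark_channel(img, patch=3):
    # Dark channel prior: in haze-free outdoor images, almost every
    # local patch contains a pixel that is dark in at least one channel.
    # img has shape (H, W, 3) with values in [0, 1].
    h, w, _ = img.shape
    min_c = img.min(axis=2)                  # per-pixel channel minimum
    pad = patch // 2
    padded = np.pad(min_c, pad, mode='edge')
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dcp_loss(dehazed):
    # Penalize a bright dark channel: a well-dehazed image should have a
    # near-zero dark channel, so its mean serves as an unsupervised loss.
    return dark_channel(dehazed).mean()

rng = np.random.default_rng(0)
dehazed = rng.uniform(size=(8, 8, 3))
print(dcp_loss(dehazed))
```

Because these terms need no ground-truth clear image, they let real hazy images participate in training, which is precisely how the method improves domain adaptivity.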

Numerical Results and Claims

Experimentally, the framework exhibits superior performance across both synthetic and real datasets. On the synthetic SOTS benchmark, the proposed approach achieves a PSNR of 27.76 dB and an SSIM of 0.93, significantly outperforming contemporaries such as EPDN, which reports a PSNR of 23.82 dB. Qualitatively, the method retains image detail and reduces the color distortion that afflicts many existing algorithms.
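For context, PSNR is a simple function of mean squared error; a short NumPy sketch, assuming images normalized to [0, 1]:

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 0.5 gives MSE = 0.25, i.e. 10 * log10(4) dB.
print(psnr(np.full((4, 4), 0.5), np.zeros((4, 4))))
```

Because PSNR grows logarithmically as MSE shrinks, the roughly 4 dB gap over EPDN corresponds to a better-than-2x reduction in mean squared error.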

Implications and Future Directions

The implications of this research are twofold. Practically, the framework represents a substantial advancement in real-world image processing, crucial for applications in autonomous driving and surveillance systems where visibility is often compromised by environmental conditions. Theoretically, it provides a robust architecture for tackling domain adaptation challenges, a persistent obstacle in deploying learning-based models outside controlled environments.

Looking ahead, exploring the integration of more complex environmental variables and extending this framework to video processing could yield further improvements. Additionally, employing unsupervised or semi-supervised methods to decrease dependency on synthetic datasets remains an exciting avenue for future exploration.

In conclusion, this paper offers a detailed and effective solution to the domain adaptation problem in image dehazing, characterized by its methodological novelty and significant empirical successes. The integration of depth-informed image translation and dual-domain training establishes a strong precedent for addressing similar challenges in other computer vision tasks.