
FD-GAN: Generative Adversarial Networks with Fusion-discriminator for Single Image Dehazing (2001.06968v2)

Published 20 Jan 2020 in cs.CV

Abstract: Recently, convolutional neural networks (CNNs) have achieved great improvements in single image dehazing and attracted much attention in research. Most existing learning-based dehazing methods are not fully end-to-end, which still follow the traditional dehazing procedure: first estimate the medium transmission and the atmospheric light, then recover the haze-free image based on the atmospheric scattering model. However, in practice, due to lack of priors and constraints, it is hard to precisely estimate these intermediate parameters. Inaccurate estimation further degrades the performance of dehazing, resulting in artifacts, color distortion and insufficient haze removal. To address this, we propose a fully end-to-end Generative Adversarial Network with Fusion-discriminator (FD-GAN) for image dehazing. With the proposed Fusion-discriminator, which takes frequency information as additional priors, our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts. Moreover, we synthesize a large-scale training dataset including various indoor and outdoor hazy images to boost the performance, and we reveal that for learning-based dehazing methods, the performance is strictly influenced by the training data. Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images with more visually pleasing dehazed results.

Authors (5)
  1. Yu Dong (14 papers)
  2. Yihao Liu (85 papers)
  3. He Zhang (236 papers)
  4. Shifeng Chen (29 papers)
  5. Yu Qiao (563 papers)
Citations (211)

Summary

  • The paper introduces an end-to-end dehazing framework that bypasses traditional parameter estimation to produce natural, artifact-free images.
  • The paper employs a novel fusion-discriminator that integrates both high- and low-frequency image information, enhancing the GAN's ability to differentiate real from generated outputs.
  • The paper validates its approach using a large-scale synthesized dataset, achieving significant gains in PSNR and SSIM over existing methods.
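The PSNR gains mentioned above can be made concrete. As a rough sketch (the `psnr` helper and the toy arrays below are illustrative, not taken from the paper), PSNR compares a restored image against its haze-free reference via the mean squared error:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a uniform error of 0.1 on a [0, 1]-ranged image gives 20 dB.
ref = np.zeros((8, 8))
dehazed = ref + 0.1
print(round(psnr(ref, dehazed), 1))  # → 20.0
```

Higher PSNR means the dehazed output is numerically closer to the ground-truth haze-free image; SSIM complements it by measuring structural similarity rather than pixel-wise error.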

FD-GAN: Fusion-discriminator GANs for Image Dehazing

The paper "FD-GAN: Generative Adversarial Networks with Fusion-discriminator for Single Image Dehazing" presents an innovative approach to the challenge of image dehazing using generative adversarial networks (GANs). The authors propose the FD-GAN framework, which stands out by employing a fusion-discriminator that integrates frequency information, enhancing the network's capability to generate high-quality dehazed images. This approach circumvents the traditional dehazing pipeline, which relies on estimating intermediate parameters such as the medium transmission and atmospheric light—an estimation process that is notoriously error-prone and can lead to artifacts and color distortions.
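The traditional pipeline referenced here rests on the standard atmospheric scattering model, commonly formulated in the dehazing literature as:

```latex
I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr)
```

where $I(x)$ is the observed hazy image, $J(x)$ the haze-free scene radiance, $A$ the global atmospheric light, and $t(x) = e^{-\beta d(x)}$ the medium transmission, which decays with scene depth $d(x)$. Prior methods recover $J$ by first estimating $t$ and $A$; FD-GAN instead learns the mapping from $I$ to $J$ directly, avoiding the error accumulation from those intermediate estimates.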

Main Contributions

  1. End-to-End Dehazing with GANs: The FD-GAN model utilizes a fully end-to-end architecture that bypasses the need for estimating intermediate dehazing parameters. This is a significant advancement as it simplifies the process and enhances the potential for generating more natural and artifact-free dehazed images.
  2. Fusion-Discriminator: A key innovation in this work is the fusion-discriminator, which is designed to consider both high-frequency (HF) and low-frequency (LF) components of the image. By incorporating this frequency information as additional priors, the discriminator can more effectively distinguish between real and generated images, facilitating higher fidelity in the dehazing results.
  3. Training Dataset Synthesis: To improve the performance of learning-based dehazing methods, the authors synthesize a large-scale training dataset that includes a diverse set of indoor and outdoor hazy images. The dataset is instrumental in demonstrating that the performance of dehazing algorithms is closely tied to the quality and diversity of the training data.
  4. Quantitative and Qualitative Evaluation: The FD-GAN achieves state-of-the-art performance across synthetic and real-world image datasets. Through rigorous experimentation, the authors demonstrate significant improvements in PSNR and SSIM metrics compared to existing methods. Qualitatively, the dehazed images produced by the FD-GAN model display fewer color distortions and artifacts, aligning more closely with true haze-free representations.
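The frequency decomposition behind the fusion-discriminator (item 2 above) can be sketched minimally as follows. This is an illustration, not the paper's exact implementation: the function names are assumed, and a simple box filter stands in for whatever low-pass operator the authors use. The idea is that a blurred copy captures the low-frequency content (color, illumination) while the residual captures high-frequency content (edges, texture), and both are fed to the discriminator alongside the image itself:

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box filter, standing in for the paper's low-pass operator."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def frequency_priors(img):
    """Split an image into low-frequency (blur) and high-frequency (residual) parts."""
    lf = box_blur(img)   # low-frequency: smooth color/illumination structure
    hf = img - lf        # high-frequency: edges and fine texture
    return lf, hf

# Toy grayscale image; the discriminator would see image + LF + HF jointly.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
lf, hf = frequency_priors(img)
disc_input = np.stack([img, lf, hf])  # shape (3, 32, 32)
```

By construction the two components sum back to the original image, so no information is lost; the decomposition simply exposes frequency structure to the discriminator as an explicit prior.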

Implications and Future Research

This work holds practical implications for computer vision applications where clear visibility is crucial, such as autonomous driving, surveillance systems, and remote sensing. The success of the FD-GAN model is chiefly attributed to its novel discriminator architecture, which could inspire future research in advanced network designs for solving other complex inverse problems in computational imaging.

In theoretical terms, the work suggests a promising direction for GANs in image processing, specifically by showcasing how unconventional discriminator designs—augmented with domain-specific priors—can push the boundaries of what these networks can achieve. Future research might explore the adaptability of fusion-discriminator approaches across other image restoration tasks, such as image denoising or super-resolution.

In conclusion, FD-GAN introduces a robust framework leveraging GANs for the task of image dehazing, setting a precedent for integrating frequency domain information directly into learning pipelines. This paper lays a foundation for future advancements in both the design of generative models and the development of domain-specific data generation techniques to enhance model performance in image restoration tasks.