
Gated Context Aggregation Network for Image Dehazing and Deraining (1811.08747v2)

Published 21 Nov 2018 in cs.CV

Abstract: Image dehazing aims to recover the uncorrupted content from a hazy image. Instead of leveraging traditional low-level or handcrafted image priors as the restoration constraints, e.g., dark channels and increased contrast, we propose an end-to-end gated context aggregation network to directly restore the final haze-free image. In this network, we adopt the latest smoothed dilation technique to help remove the gridding artifacts caused by the widely-used dilated convolution with negligible extra parameters, and leverage a gated sub-network to fuse the features from different levels. Extensive experiments demonstrate that our method can surpass previous state-of-the-art methods by a large margin both quantitatively and qualitatively. In addition, to demonstrate the generality of the proposed method, we further apply it to the image deraining task, which also achieves the state-of-the-art performance. Code has been made available at https://github.com/cddlyf/GCANet.

Citations (538)

Summary

  • The paper introduces GCANet, an end-to-end network that directly predicts haze-free images without relying on traditional image priors.
  • It utilizes smoothed dilated convolutions to improve spatial context and reduce artifacts, yielding higher PSNR and SSIM on benchmarks.
  • The gated sub-network fuses multi-level features, enabling robust performance in both dehazing and deraining tasks.

Gated Context Aggregation Network for Image Dehazing and Deraining

The paper introduces the Gated Context Aggregation Network (GCANet), an end-to-end architecture for image dehazing and deraining, a notable contribution to computer vision in removing atmospheric and rain-induced degradations from images. The approach diverges from traditional methods that rely on handcrafted image priors, opting instead for a learning-based model that restores the image directly.

Key Contributions

GCANet primarily addresses the challenges in existing dehazing techniques, which often require estimating transmission maps or atmospheric light—factors that are typically unknown and difficult to obtain. By leveraging a deep learning model, GCANet circumvents these obstacles, offering a direct prediction of the haze-free image.
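For context, traditional dehazing methods invert the atmospheric scattering model I = J·t + A·(1 − t), where J is the scene radiance, t the transmission map, and A the atmospheric light; the inversion J = (I − A)/t + A requires estimating t and A, which is exactly the step GCANet avoids. A toy NumPy illustration of why the inversion needs those quantities:

```python
import numpy as np

# Atmospheric scattering model: I = J*t + A*(1 - t).
# Toy values for illustration only.
J = np.full((2, 2), 0.8)   # true scene radiance
t = np.full((2, 2), 0.5)   # transmission map
A = 1.0                    # global atmospheric light
I = J * t + A * (1 - t)    # hazy observation

# Inverting the model recovers J exactly, but only when t and A are known;
# in practice both must be estimated, which is error-prone.
J_rec = (I - A) / t + A
print(np.allclose(J_rec, J))  # True
```

GCANet sidesteps this estimation entirely by learning a direct mapping from the hazy input to the restored output.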

Key elements of GCANet include:

  • Smoothed Dilated Convolution: To enhance context aggregation without sacrificing spatial resolution, the authors employ the latest smoothed dilation technique, mitigating gridding artifacts—a common problem in traditional dilated convolutions.
  • Gated Sub-network: This component fuses features from different levels, assigning each level an importance weight and combining the weighted features for improved restoration accuracy.
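The two components above can be sketched in PyTorch. This is a minimal illustration of the ideas, not the authors' exact configuration: module names, channel counts, and the identity initialization are assumptions for the sketch. The share-separable pre-convolution uses a single (2r − 1)×(2r − 1) kernel shared across channels before a dilated convolution with rate r, and the gate predicts one weight map per feature level.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShareSepConv(nn.Module):
    """One spatial kernel shared by every channel, applied before a dilated
    conv so neighboring outputs draw on overlapping inputs, which smooths
    the gridding artifacts of plain dilated convolution."""
    def __init__(self, kernel_size):
        super().__init__()
        w = torch.zeros(1, 1, kernel_size, kernel_size)
        w[0, 0, kernel_size // 2, kernel_size // 2] = 1.0  # identity init
        self.weight = nn.Parameter(w)
        self.pad = kernel_size // 2

    def forward(self, x):
        c = x.size(1)
        # Expand the single kernel to all channels (depthwise, tied weights).
        w = self.weight.expand(c, 1, *self.weight.shape[2:]).contiguous()
        return F.conv2d(x, w, padding=self.pad, groups=c)

class SmoothDilatedBlock(nn.Module):
    """Smoothed dilated convolution: share-separable pre-conv of size
    (2*dilation - 1), then a 3x3 dilated conv."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.pre = ShareSepConv(2 * dilation - 1)
        self.conv = nn.Conv2d(channels, channels, 3,
                              padding=dilation, dilation=dilation)

    def forward(self, x):
        return F.relu(self.conv(self.pre(x)))

class GatedFusion(nn.Module):
    """Gated sub-network: predicts one weight map per feature level from
    the concatenated levels, then sums the weighted levels."""
    def __init__(self, channels, levels=3):
        super().__init__()
        self.gate = nn.Conv2d(channels * levels, levels, 3, padding=1)

    def forward(self, feats):
        gates = self.gate(torch.cat(feats, dim=1))      # (N, levels, H, W)
        return sum(g.unsqueeze(1) * f
                   for g, f in zip(gates.unbind(dim=1), feats))

x = torch.randn(1, 16, 32, 32)
y = SmoothDilatedBlock(16, dilation=2)(x)
out = GatedFusion(16)([y, y, y])
print(y.shape, out.shape)
```

Both modules preserve spatial resolution, which is why the network can aggregate context without downsampling.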

Experimental Results

The effectiveness of GCANet is underscored through rigorous experimentation. On the RESIDE benchmark, GCANet surpasses existing state-of-the-art dehazing methods both qualitatively and quantitatively. For instance, it achieves significant improvements in PSNR and SSIM values, indicating superior clarity and structural preservation in the processed images.
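PSNR, one of the metrics cited above, measures pixel-level fidelity in decibels against a clean reference; higher is better. A minimal implementation for intuition (the benchmark numbers in the paper come from the authors' own evaluation pipeline, not this sketch):

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    # Peak signal-to-noise ratio in dB between a reference image and an
    # estimate, for images normalized to [0, peak].
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.ones((4, 4))
est = ref - 0.1          # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(ref, est), 2))  # 20.0
```

SSIM complements PSNR by scoring local structural similarity rather than raw pixel error, which is why papers typically report both.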

Moreover, the model's generality is validated through its application to image deraining. Despite being designed for dehazing, GCANet achieves commendable performance on deraining tasks, outperforming several contemporary models.

Implications

The adoption of a deep learning approach for these tasks represents a shift towards more adaptive and robust solutions in image restoration. GCANet's strong performance on both dehazing and deraining hints at broader applicability to related problems, such as denoising or the removal of other unwanted artifacts from images.

Future Directions

Future work could integrate more sophisticated loss functions or extend the approach to video restoration, where temporal continuity across frames could further improve results beyond what is achievable from static images alone.

Overall, GCANet reflects significant progress in addressing complex image restoration challenges, moving towards more comprehensive and universally applicable solutions in computer vision.