Contrastive Learning for Compact Single Image Dehazing (2104.09367v1)

Published 19 Apr 2021 in cs.CV and cs.AI

Abstract: Single image dehazing is a challenging ill-posed problem due to severe information degeneration. However, existing deep-learning-based dehazing methods only adopt clear images as positive samples to guide the training of the dehazing network, while negative information is left unexploited. Moreover, most of them focus on strengthening the dehazing network by increasing its depth and width, leading to significant computation and memory requirements. In this paper, we propose a novel contrastive regularization (CR) built upon contrastive learning to exploit the information of both hazy images and clear images as negative and positive samples, respectively. CR ensures that the restored image is pulled closer to the clear image and pushed far away from the hazy image in the representation space. Furthermore, considering the trade-off between performance and memory storage, we develop a compact dehazing network based on an autoencoder-like (AE) framework. It involves an adaptive mixup operation and a dynamic feature enhancement module, which benefit from preserving information flow adaptively and expanding the receptive field to improve the network's transformation capability, respectively. We term our dehazing network with autoencoder and contrastive regularization AECR-Net. Extensive experiments on synthetic and real-world datasets demonstrate that our AECR-Net surpasses the state-of-the-art approaches. The code is released at https://github.com/GlassyWu/AECR-Net.

Contrastive Learning for Compact Single Image Dehazing

The paper "Contrastive Learning for Compact Single Image Dehazing" introduces a novel approach to tackle the ill-posed problem of single image dehazing. Traditional deep learning methods for dehazing typically rely on clear images as positive samples to guide neural network training, but they fail to leverage negative samples, namely hazy images, which could provide valuable information for improving the dehazing process. Furthermore, many existing models address dehazing by significantly increasing the network's depth and width, resulting in high computational and memory demands. This research proposes a new methodology employing contrastive regularization (CR) to make more efficient use of both positive and negative samples in representation learning.

CR is built upon contrastive learning principles, ensuring that the outputs of the dehazing network are pulled closer to the clear images and pushed farther from the hazy images in the representation space. This approach not only enhances dehazing performance but also reduces the artifacts and color distortions commonly seen in methods that rely solely on a reconstruction loss.
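
As a rough illustration, the PyTorch sketch below implements a contrastive-style regularizer of this kind: the restored output is pulled toward the clear image (positive) and pushed away from the hazy input (negative) in a frozen VGG-19 feature space. The chosen layers, weights, and the ratio form of the loss are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class ContrastiveRegularization(nn.Module):
    """Sketch of a contrastive regularization term in VGG-19 feature space.

    Assumptions: inputs are already normalized for VGG, and the layer
    indices/weights below are illustrative rather than the paper's values.
    """

    def __init__(self, layer_ids=(3, 8, 13), weights=(1.0, 1.0, 1.0)):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False  # frozen feature extractor
        self.vgg = vgg
        self.layer_ids = set(layer_ids)
        self.weights = weights
        self.l1 = nn.L1Loss()

    def _features(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats

    def forward(self, restored, clear, hazy):
        f_r = self._features(restored)
        f_p = self._features(clear)
        f_n = self._features(hazy)
        loss = 0.0
        for w, r, p, n in zip(self.weights, f_r, f_p, f_n):
            # Numerator pulls the restoration toward the clear image;
            # denominator pushes it away from the hazy input.
            loss = loss + w * self.l1(r, p) / (self.l1(r, n) + 1e-7)
        return loss
```

In training, this term would typically be added to a standard reconstruction loss (e.g., L1) with a small weighting factor.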

The authors introduce a compact yet effective autoencoder-like (AE) network architecture, AECR-Net, specifically designed to balance performance with computational efficiency. Key components of AECR-Net include an adaptive mixup operation that enhances feature preservation across the network and a dynamic feature enhancement (DFE) module based on deformable convolution that dynamically extends the receptive field for capturing more robust spatial features. These additions equip the network with increased transformation capabilities without excessive resource requirements.
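
The adaptive mixup idea can be sketched as a learnable, sigmoid-gated blend between matching downsampling (encoder) and upsampling (decoder) feature maps of the same shape. The single-scalar gate below is an assumption for illustration and is not necessarily the exact parameterization used in AECR-Net.

```python
import torch
import torch.nn as nn


class AdaptiveMixup(nn.Module):
    """Minimal sketch: learnable blend of a shallow and a deep feature map.

    Assumption: one learnable scalar per connection, squashed to (0, 1)
    with a sigmoid, controls how much shallow information is preserved.
    """

    def __init__(self, init=0.0):
        super().__init__()
        self.theta = nn.Parameter(torch.tensor(init))

    def forward(self, shallow, deep):
        w = torch.sigmoid(self.theta)
        return w * shallow + (1.0 - w) * deep


# Usage: fuse an encoder feature into the matching decoder feature.
mix = AdaptiveMixup()
enc_feat = torch.randn(1, 64, 128, 128)
dec_feat = torch.randn(1, 64, 128, 128)
fused = mix(enc_feat, dec_feat)  # same shape as the inputs
```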

Experimental results demonstrate that AECR-Net outperforms state-of-the-art dehazing techniques on both synthetic and real-world datasets. Notably, AECR-Net achieves a PSNR of 37.17 dB and an SSIM of 0.9901 on the synthetic SOTS dataset with fewer parameters than many competitors. On real-world datasets such as Dense-Haze and NH-HAZE, the proposed method maintains superior performance, reinforcing its robustness and applicability in diverse scenarios.

This paper's implication is twofold: practically, it presents a highly efficient model suitable for deployment in resource-constrained environments, such as mobile devices; theoretically, it paves the way for broader applications of contrastive learning in low-level vision tasks beyond traditional high-level applications.

Future work could explore the integration of more sophisticated representation learning frameworks and evaluate the generalizability of CR and AE-like architectures for other image restoration tasks. Additionally, understanding the impact of contrastive sample selection and its adaptation to changing environmental conditions could further refine dehazing performance. Thus, this paper contributes significantly to the ongoing development of efficient and effective methods for image restoration and enhancement.

Authors (8)
  1. Haiyan Wu (18 papers)
  2. Yanyun Qu (39 papers)
  3. Shaohui Lin (45 papers)
  4. Jian Zhou (262 papers)
  5. Ruizhi Qiao (18 papers)
  6. Zhizhong Zhang (42 papers)
  7. Yuan Xie (188 papers)
  8. Lizhuang Ma (145 papers)
Citations (503)