You Only Look Yourself: Unsupervised and Untrained Single Image Dehazing Neural Network (2006.16829v1)

Published 30 Jun 2020 in cs.CV

Abstract: In this paper, we study two challenging and less-touched problems in single image dehazing, namely, how to make deep learning achieve image dehazing without training on ground-truth clean images (unsupervised) and without an image collection (untrained). An unsupervised neural network avoids the labor-intensive collection of hazy-clean image pairs, and an untrained model is a "real" single image dehazing approach that removes haze based only on the observed hazy image itself, with no extra images used. Motivated by the layer disentanglement idea, we propose a novel method, called You Only Look Yourself (YOLY), which could be one of the first unsupervised and untrained neural networks for image dehazing. In brief, YOLY employs three joint subnetworks to separate the observed hazy image into several latent layers, i.e., a scene radiance layer, a transmission map layer, and an atmospheric light layer. These layers are then recomposed into the hazy image in a self-supervised manner. Thanks to the unsupervised and untrained characteristics of YOLY, our method bypasses the conventional training paradigm of deep models on hazy-clean pairs or a large-scale dataset, thus avoiding labor-intensive data collection and the domain shift issue. Besides, our method also provides an effective learning-based haze transfer solution thanks to its layer disentanglement mechanism. Extensive experiments show the promising performance of our method in image dehazing compared with 14 methods on four databases.

Authors (6)
  1. Boyun Li (5 papers)
  2. Yuanbiao Gou (7 papers)
  3. Shuhang Gu (56 papers)
  4. Jerry Zitao Liu (2 papers)
  5. Joey Tianyi Zhou (116 papers)
  6. Xi Peng (115 papers)
Citations (178)

Summary

Unsupervised and Untrained Image Dehazing: The YOLY Approach

The paper presents a novel approach to single image dehazing: an unsupervised and untrained neural network. Dubbed "You Only Look Yourself" (YOLY), the method diverges from conventional supervised learning paradigms by bypassing the need for paired hazy-clean datasets and for extensive training on image collections. Instead, YOLY achieves dehazing by decomposing a single hazy image into its constituent layers: scene radiance, transmission map, and atmospheric light.

Method Overview

The YOLY framework is anchored on layer disentanglement, executed through three interconnected subnetworks (a minimal sketch of their roles follows the list):

  1. J-Net: Estimates the clean image (scene radiance), guided by the color attenuation prior, which relates haze concentration to the difference between an image's brightness and saturation.
  2. T-Net: Estimates the transmission map with a convolutional structure; it relies on no explicit prior or supervised loss and is constrained only by the self-supervised reconstruction of the overall framework.
  3. A-Net: Predicts the atmospheric light with a variational auto-encoder, which assumes a latent Gaussian distribution and is fitted via variational inference.
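
The summary does not pin these subnetworks to a particular backbone, so the following is only a minimal PyTorch sketch of their roles; the layer widths, depths, and the exact VAE head of A-Net are illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # A small conv + ReLU unit shared by the illustrative subnetworks.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class JNet(nn.Module):
    """Estimates the scene radiance (clean image) J from the hazy input."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, ch), conv_block(ch, ch),
                                  nn.Conv2d(ch, 3, 1), nn.Sigmoid())

    def forward(self, x):
        return self.body(x)


class TNet(nn.Module):
    """Estimates a single-channel transmission map t with values in (0, 1)."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, ch), conv_block(ch, ch),
                                  nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return self.body(x)


class ANet(nn.Module):
    """Predicts global atmospheric light A via a VAE-style head (mean / log-variance)."""
    def __init__(self, ch=64):
        super().__init__()
        self.enc = nn.Sequential(conv_block(3, ch), nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.mu = nn.Linear(ch, 3)
        self.logvar = nn.Linear(ch, 3)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample A from the latent Gaussian.
        a = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return torch.sigmoid(a).view(-1, 3, 1, 1), mu, logvar
```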

This design lets the hazy image be reconstructed from the three estimated layers in a self-supervised manner that mirrors the atmospheric scattering model, the physical model typically used to synthesize hazy images.
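
As a hedged illustration of that composition, the sketch below (reusing the subnetwork stubs above) rebuilds the hazy input as I = J·t + A·(1 − t) and fits all three networks to the single observed image; the MSE reconstruction term, KL weight, learning rate, and step count are assumptions for this sketch, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F


def scattering_compose(J, t, A):
    # Atmospheric scattering model: I(x) = J(x) * t(x) + A * (1 - t(x)).
    return J * t + A * (1.0 - t)


def self_supervised_loss(hazy, J, t, A, mu, logvar, kl_weight=1e-3):
    # Reconstruction term: the recomposed image should match the observed hazy input.
    recon = F.mse_loss(scattering_compose(J, t, A), hazy)
    # KL regularizer for the VAE-style A-Net head (an assumption of this sketch).
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl_weight * kl


# Single-image optimization: the networks are fitted to the one observed hazy image only.
hazy = torch.rand(1, 3, 256, 256)  # stand-in for a loaded hazy image
j_net, t_net, a_net = JNet(), TNet(), ANet()
params = list(j_net.parameters()) + list(t_net.parameters()) + list(a_net.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

for step in range(500):
    J, t = j_net(hazy), t_net(hazy)
    A, mu, logvar = a_net(hazy)
    loss = self_supervised_loss(hazy, J, t, A, mu, logvar)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After optimization, the J-Net output J serves as the dehazed estimate.
```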

Numerical Performance

The paper reports compelling performance across synthetic and real-world benchmarks against 14 comparison methods spanning supervised, prior-based, and other unsupervised algorithms. YOLY achieves competitive PSNR and SSIM figures and outperforms most unsupervised methods, underscoring its robustness even in the challenging single-image setting.

Implications and Future Work

YOLY gains practical efficiency by eliminating the need for training on large-scale datasets, an advantage in scenarios where collecting comprehensive paired data is infeasible. Furthermore, the authors introduce a haze transfer capability, suggesting haze synthesis that moves beyond manually specified parameters.
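
Under the same sketch assumptions as above (and not the authors' exact procedure), haze transfer can be pictured as reusing the transmission map and atmospheric light disentangled from one hazy image to re-compose a different clean image:

```python
import torch
import torch.nn.functional as F


def transfer_haze(clean_target, t_source, A_source):
    # Re-compose a different clean image with the haze layers disentangled from a source image:
    # I_new = J_target * t_source + A_source * (1 - t_source).
    t = F.interpolate(t_source, size=clean_target.shape[-2:], mode='bilinear', align_corners=False)
    return clean_target * t + A_source * (1.0 - t)
```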

The theoretical implication is significant: the work expands the boundary of what unsupervised learning with deep neural networks can achieve in computer vision. Future directions may enhance the framework with more sophisticated disentanglement techniques or explore its adaptability to other complex visual tasks, such as video dehazing or other atmospheric interference challenges.

In conclusion, the paper makes clear that unsupervised and untrained methodologies such as YOLY hold promise for image restoration, reducing dependence on labeled data and extensive pre-training and potentially establishing a new paradigm for deep-learning-based visual applications.