
Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution (2308.01738v4)

Published 3 Aug 2023 in cs.CV

Abstract: Visibility in hazy nighttime scenes is frequently reduced by multiple factors, including low light, intense glow, light scattering, and the presence of multicolored light sources. Existing nighttime dehazing methods often struggle with handling glow or low-light conditions, resulting in either excessively dark visuals or unsuppressed glow outputs. In this paper, we enhance the visibility from a single nighttime haze image by suppressing glow and enhancing low-light regions. To handle glow effects, our framework learns from rendered glow pairs. Specifically, a light source aware network is proposed to detect light sources of night images, followed by the APSF (Atmospheric Point Spread Function)-guided glow rendering. Our framework is then trained on the rendered images, resulting in glow suppression. Moreover, we utilize gradient-adaptive convolution to capture edges and textures in hazy scenes. By leveraging extracted edges and textures, we enhance the contrast of the scene without losing important structural details. To boost low-light intensity, our network learns an attention map, which is then adjusted by gamma correction. This attention has high values on low-light regions and low values on haze and glow regions. Extensive evaluation on real nighttime haze images demonstrates the effectiveness of our method. Our experiments demonstrate that our method achieves a PSNR of 30.38dB, outperforming state-of-the-art methods by 13% on the GTA5 nighttime haze dataset. Our data and code are available at https://github.com/jinyeying/nighttime_dehaze.

Citations (33)

Summary

  • The paper introduces APSF-guided glow rendering to effectively suppress unnatural glow effects in nighttime haze images.
  • The paper proposes a novel gradient-adaptive convolution that preserves edges and textures while enhancing low-light regions.
  • The paper achieves a 13% PSNR improvement on the GTA5 nighttime dataset, highlighting its potential for autonomous driving and surveillance.

Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution

The paper "Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution" presents a method for improving the visibility of nighttime images affected by haze. It addresses challenges such as intense glow, low light, and light scattering, which are not tackled effectively by existing methods. This paper proposes a learning-based approach, incorporating APSF-guided glow rendering and gradient-adaptive convolution, to improve the quality of nighttime image dehazing.
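As a rough intuition for the glow-rendering step: glow around a detected light source can be approximated by spreading each light-source pixel with a heavy-tailed kernel. The sketch below is a simplified stand-in for the APSF (a physics-based scattering model), not the paper's implementation; the function name, kernel shape, and parameters are hypothetical.

```python
import numpy as np

def render_glow(light_map, radius=5, falloff=1.5):
    """Spread each light-source pixel with a heavy-tailed kernel to
    mimic glow. light_map: H x W float array in [0, 1], nonzero at
    detected light sources. A crude stand-in for APSF rendering."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r = np.sqrt(xs ** 2 + ys ** 2)
    kernel = 1.0 / (1.0 + r) ** falloff   # heavy-tailed, peaks at center
    kernel /= kernel.sum()                # normalize to conserve energy
    H, W = light_map.shape
    p = np.pad(light_map, radius)
    glow = np.zeros((H, W), dtype=np.float64)
    k = 2 * radius + 1
    # Direct (shift-and-add) 2D convolution; fine for small kernels.
    for i in range(k):
        for j in range(k):
            glow += kernel[i, j] * p[i:i + H, j:j + W]
    return np.clip(light_map + glow, 0.0, 1.0)
```

Training pairs can then be formed from a clean image and its glow-rendered counterpart, so the network learns to invert the rendering.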

Key Contributions

  1. APSF-Guided Glow Rendering: This paper introduces a systematic way to handle glow effects using the Atmospheric Point Spread Function (APSF) to render glow in images. A light source aware network is leveraged to detect light sources in images, followed by APSF-guided glow rendering. The framework is trained on these synthetically rendered glow images, allowing it to effectively suppress unnatural glow effects in real nighttime haze images.
  2. Gradient-Adaptive Convolution: To enhance low-light regions while preserving edges and textures, the authors propose a novel gradient-adaptive convolution. This convolution technique captures important edges and textures within hazy images, preventing loss of structural details and ensuring enhanced contrast across the scenes. The convolution adapts through extracted image gradients and bilateral kernel operations, enabling it to extract image details that are typically degraded in conventional dehazing approaches.
  3. Attention-Based Low-Light Enhancement: The approach incorporates an attention mechanism to identify and enhance low-light areas specifically. By learning an attention map and applying gamma correction, this method improves visibility in dark areas without affecting regions that already have sufficient lighting.
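The gradient-adaptive idea in point 2 can be sketched as an edge-aware filter: pixels with strong image gradients are mixed less with their neighbours, so edges survive smoothing. This is a minimal NumPy illustration of the principle, not the paper's learned convolution; the function name and `sigma_g` parameter are hypothetical.

```python
import numpy as np

def gradient_adaptive_smooth(img, sigma_g=0.1):
    """Edge-aware 3x3 smoothing: flat regions are blurred, high-gradient
    pixels (edges, textures) are left mostly untouched."""
    gy, gx = np.gradient(img)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    # Weight near 1 in flat regions (smooth there), near 0 on edges.
    w = np.exp(-(grad_mag / sigma_g) ** 2)
    # 3x3 box blur via padding and shifts (no SciPy dependency).
    H, W = img.shape
    p = np.pad(img, 1, mode="edge")
    blur = sum(p[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0
    return w * blur + (1.0 - w) * img
```

The paper's bilateral-kernel formulation plays a similar role: the amount of mixing at each pixel is modulated by local gradient information rather than being spatially uniform.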

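The attention-plus-gamma mechanism in point 3 amounts to blending a gamma-brightened image with the original, weighted by where the scene is dark. A minimal sketch, assuming the attention map is given (in the paper it is learned by a network); the function name and default gamma are hypothetical.

```python
import numpy as np

def enhance_low_light(img, attention, gamma=2.2):
    """Brighten low-light regions guided by an attention map.

    img: H x W float array in [0, 1].
    attention: H x W float array in [0, 1], high in dark regions,
    low in haze/glow regions (learned in the paper; given here).
    """
    # Gamma correction lifts dark pixels much more than bright ones.
    brightened = np.power(img, 1.0 / gamma)
    # Dark regions take the brightened value; lit regions stay put.
    return attention * brightened + (1.0 - attention) * img
```

Because the attention is low on glow and haze, brightening does not amplify the very artifacts the rest of the pipeline suppresses.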
Experiments and Results

The experimental evaluation demonstrates the effectiveness of the proposed method on real nighttime haze images. A salient result is a Peak Signal-to-Noise Ratio (PSNR) of 30.38 dB on the GTA5 nighttime haze dataset, outperforming prior state-of-the-art methods by 13%. These results indicate that the approach handles the complex lighting and atmospheric interference characteristic of nighttime environments.
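For a sense of scale, PSNR is computed from the mean squared error between the restored image and the reference; on a [0, 1] intensity range, 30.38 dB corresponds to an RMS pixel error of roughly 0.03. A minimal sketch of the standard metric:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB for images in [0, max_val]."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(restored, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 0.03 per pixel gives an MSE of 9e-4 and hence a PSNR of about 30.5 dB, close to the reported figure.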

Implications and Future Work

The implications of this research are noteworthy for fields demanding accurate vision in suboptimal lighting conditions, such as autonomous driving, drone navigation, and surveillance systems. In these domains, improved nighttime vision can significantly impact performance and safety.

This paper opens up several avenues for future research. While APSF-guided glow rendering shows impressive results, exploring other illumination models or enhancing the adaptability of the gradient-adaptive convolution technique could offer further improvements. Additionally, integrating this approach with real-world applications in autonomous systems could help refine these techniques and adapt them to dynamically changing environments.

In summary, this paper makes valuable contributions to the ongoing effort in computer vision to bridge the gap between daytime and nighttime imaging capabilities. It provides an important step forward in understanding and processing the complexities of nighttime haze images, with promising potential for both academic inquiry and practical applications in artificial intelligence and machine vision.