
Low-Light Image Enhancement with Normalizing Flow (2109.05923v1)

Published 13 Sep 2021 in eess.IV and cs.CV

Abstract: To enhance low-light images to normally-exposed ones is highly ill-posed, namely that the mapping relationship between them is one-to-many. Previous works based on pixel-wise reconstruction losses and deterministic processes fail to capture the complex conditional distribution of normally exposed images, which results in improper brightness, residual noise, and artifacts. In this paper, we investigate modeling this one-to-many relationship via a proposed normalizing flow model. An invertible network takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution. In this way, the conditional distribution of the normally exposed images can be well modeled, and the enhancement process, i.e., the other inference direction of the invertible network, is equivalent to being constrained by a loss function that better describes the manifold structure of natural images during training. Experimental results on existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise and fewer artifacts, and richer colors.

Citations (288)

Summary

  • The paper introduces LLFlow, which leverages conditional normalizing flow to model the one-to-many mapping between low-light and well-exposed images.
  • It demonstrates significant improvements in structural and visual consistency on benchmarks like LOL and VE-LOL, outperforming traditional deterministic methods.
  • The approach incorporates a color map module inspired by Retinex theory to reduce color saturation issues and minimize noise and artifacts.

Low-Light Image Enhancement Using Normalizing Flow Models

The paper entitled "Low-Light Image Enhancement with Normalizing Flow" presents a compelling approach to tackling the issue of enhancing low-light images to match the quality of normally exposed photos. The research scrutinizes the inadequacies of traditional pixel-wise algorithms in capturing the conditional distribution complexities between low-light images and their well-exposed counterparts. By leveraging the capabilities of normalizing flow, the paper introduces an innovative framework that models this one-to-many relationship more effectively.

The core contribution of this paper lies in its application of normalizing flow models, previously used in areas such as computational photography and image super-resolution, to low-light image enhancement. The authors propose LLFlow, an invertible network trained with a negative log-likelihood (NLL) loss, so that the model learns the probabilistic distribution of normally exposed images conditioned on their low-light inputs. This shift away from deterministic regression allows for better adjustment of illumination and a substantial reduction in noise and artifacts.
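To make the training objective concrete, the sketch below shows a minimal conditional affine-coupling layer and the NLL it induces via the change-of-variables formula. This is an illustrative toy in NumPy, not the authors' architecture: the split sizes, weight matrices, and single-layer structure are assumptions for exposition; LLFlow stacks many such invertible layers with learned conditioning networks.

```python
import numpy as np

def coupling_forward(x, cond, scale_w, shift_w):
    """One conditional affine coupling step (illustrative, single layer).

    x:       input vector (here, a stand-in for the normally exposed image)
    cond:    conditioning features (a stand-in for the low-light input)
    scale_w, shift_w: toy weight matrices mapping [x1, cond] to log-scale/shift
    """
    x1, x2 = np.split(x, 2)                 # keep x1 fixed, transform x2
    h = np.concatenate([x1, cond])
    log_s = np.tanh(h @ scale_w)            # bounded log-scale for stability
    t = h @ shift_w
    y2 = x2 * np.exp(log_s) + t             # invertible: x2 = (y2 - t) * exp(-log_s)
    log_det = np.sum(log_s)                 # log |det Jacobian| of the coupling
    return np.concatenate([x1, y2]), log_det

def nll(x, cond, scale_w, shift_w):
    """Negative log-likelihood: map x -> z, score z under a standard Gaussian,
    and add the log-determinant from the change-of-variables formula."""
    z, log_det = coupling_forward(x, cond, scale_w, shift_w)
    log_pz = -0.5 * np.sum(z**2 + np.log(2 * np.pi))
    return -(log_pz + log_det)
```

Because the coupling is analytically invertible, the same network that evaluates the NLL during training can be run in the reverse direction at test time, sampling a Gaussian latent and mapping it (conditioned on the low-light input) to an enhanced image.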

Key Findings and Methodology

  1. Conditional Normalizing Flow: The authors argue that existing methods using deterministic algorithms fail to exploit the manifold structure of natural images. LLFlow, however, uses conditional normalizing flow to learn a more nuanced distribution, offering a better perceptual quality of the enhanced images.
  2. Improved Structural and Visual Consistency: The method shows significant improvement in preserving structural details and perceptual similarity, as evidenced by quantitative and qualitative results on benchmark datasets such as LOL and VE-LOL.
  3. Color Map Incorporation: Inspired by Retinex theory, the paper introduces a module that extracts color maps as a prior for the model, effectively dealing with color saturation and minimizing distortion.
  4. Numerical Results: The model outperforms state-of-the-art techniques on metrics including PSNR, SSIM, and LPIPS, underscoring its effectiveness at producing well-exposed results with less noise and fewer artifacts.
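The Retinex-inspired color map in point 3 can be illustrated with a simple computation. A common way to obtain an illumination-invariant color prior (consistent with the Retinex decomposition I = R * L, though the paper's module may differ in detail) is to normalize each pixel by its channel mean, which cancels an illumination component shared across channels:

```python
import numpy as np

def color_map(img, eps=1e-6):
    """Illumination-invariant color map: each pixel divided by its channel mean.

    Under a Retinex-style model I = R * L with illumination L approximately
    shared across color channels, this ratio cancels L and retains only the
    reflectance colors, so a dark image and its well-exposed counterpart yield
    (approximately) the same map.

    img: float array of shape (H, W, 3), values in (0, 1].
    """
    mean = img.mean(axis=-1, keepdims=True)
    return img / (mean + eps)
```

Feeding such a map to the model as a prior helps decouple color from brightness, which is why the authors report reduced color saturation issues in the enhanced output.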

Implications and Future Work

The implications of this work are manifold. The proposed framework benefits a range of applications, from photography in suboptimal lighting conditions to broader computer vision tasks that rely on well-exposed, high-quality inputs. The methodology sets a precedent for integrating probabilistic models into vision tasks that traditionally rely on deterministic processes, pointing toward more robust solutions.

Future work could explore more extensive applications of normalizing flow in image processing tasks that involve high variability conditions or unbalanced datasets. Additionally, further refinement of the learning algorithms or exploration of hybrid models combining normalizing flow with other generative models might yield even more promising results for specific contexts or datasets.

Overall, this paper advances the field of image enhancement by providing a model that more accurately represents the manifold structure of natural images, thereby delivering qualitatively and quantitatively superior results.
