Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation (2103.00860v1)

Published 1 Mar 2021 in cs.CV

Abstract: This paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network. Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image. The curve estimation is specially designed, considering pixel value range, monotonicity, and differentiability. Zero-DCE is appealing in its relaxed assumption on reference images, i.e., it does not require any paired or even unpaired data during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and drive the learning of the network. Despite its simplicity, we show that it generalizes well to diverse lighting conditions. Our method is efficient as image enhancement can be achieved by an intuitive and simple nonlinear curve mapping. We further present an accelerated and light version of Zero-DCE, called Zero-DCE++, that takes advantage of a tiny network with just 10K parameters. Zero-DCE++ has a fast inference speed (1000/11 FPS on a single GPU/CPU for an image of size 1200*900*3) while keeping the enhancement performance of Zero-DCE. Extensive experiments on various benchmarks demonstrate the advantages of our method over state-of-the-art methods qualitatively and quantitatively. Furthermore, the potential benefits of our method to face detection in the dark are discussed. The source code will be made publicly available at https://li-chongyi.github.io/Proj_Zero-DCE++.html.

Citations (401)

Summary

  • The paper introduces Zero-DCE, which reframes low-light enhancement as pixel-wise curve estimation for flexible and efficient dynamic range mapping.
  • It employs four non-reference loss functions—spatial consistency, exposure control, color constancy, and illumination smoothness—to achieve visually pleasing results without paired training data.
  • Efficiency optimizations reduce the model to 10K parameters and 0.115G FLOPs, enabling real-time performance and improved low-light face detection.

Overview of Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement

The paper "Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation" develops Zero-Reference Deep Curve Estimation (Zero-DCE) for enhancing low-light images. The authors tackle low-light enhancement through a zero-reference learning framework that dispenses with paired training data, a common requirement in most conventional and data-driven enhancement techniques. The approach trains a lightweight deep network, DCE-Net, to estimate pixel-wise, higher-order curves that adjust the dynamic range of an input image, yielding notable gains in computational efficiency and flexibility.

Key Methodological Contributions

Zero-DCE represents a paradigm shift in the field by reframing the image enhancement task as an image-specific curve estimation problem rather than relying on traditional image-to-image mappings. This formulation includes several noteworthy aspects:

  1. Curve Estimation and Network Design: The solution centers on a specially designed quadratic function, termed the Light-Enhancement (LE) curve, which maps each input pixel value within a normalized dynamic range. The simple quadratic form is applied iteratively to obtain a higher-order curve that is more robust to varying lighting conditions. By construction, the LE curve preserves the pixel value range and is both monotonic and differentiable, enabling efficient computation and gradient-based training.
  2. Zero-Reference Learning: In contrast to GAN-based and CNN-based methods, which require paired or unpaired datasets for training, Zero-DCE trains its network with non-reference loss functions alone. The paper details four such losses: spatial consistency, exposure control, color constancy, and illumination smoothness. Together they implicitly guide the network toward visually pleasing enhancements without any ground-truth references.
  3. Efficiency Optimizations: Zero-DCE introduces an accelerated version, Zero-DCE++, which employs depthwise separable convolutions and parameter sharing across iterative curve applications. These strategic modifications drastically reduce the model complexity to only 10K parameters and 0.115G FLOPs, achieving real-time inference speeds on standard hardware.
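The iterative curve application in point 1 is simple enough to sketch directly. Below is a minimal NumPy illustration of the higher-order LE curve, LE(x) = x + A·x·(1 − x), applied for several iterations with per-pixel parameter maps; the function name and shapes are illustrative, not the authors' code:

```python
import numpy as np

def enhance(x, alpha_maps):
    """Apply the higher-order LE curve iteratively (sketch).

    x           : input image with values in [0, 1], shape (H, W, 3)
    alpha_maps  : per-pixel curve parameter maps in [-1, 1], one per
                  iteration, each the same shape as x
    """
    for A in alpha_maps:
        # Quadratic LE curve: monotonic in x and bounded, so the
        # output stays within [0, 1] for A in [-1, 1].
        x = x + A * x * (1.0 - x)
    return x

# Example: brighten a uniformly dark image with 8 iterations
dark = np.full((4, 4, 3), 0.1)
bright = enhance(dark, [np.ones_like(dark)] * 8)
```

Because each iteration reuses the same closed-form mapping, inference is just a handful of element-wise operations once the parameter maps are predicted, which is where the method's speed comes from.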
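Of the four non-reference losses in point 2, the exposure-control loss is the easiest to illustrate: it penalizes the distance between the average intensity of local patches and a well-exposedness target (the paper uses a gray level of 0.6). A minimal sketch, assuming a NumPy image in [0, 1] and non-overlapping 16×16 patches (names are illustrative):

```python
import numpy as np

def exposure_loss(y, target=0.6, patch=16):
    """Non-reference exposure-control loss (sketch).

    y      : enhanced image in [0, 1], shape (H, W, 3)
    target : well-exposedness gray level (0.6 in the paper)
    patch  : side length of non-overlapping averaging regions
    """
    gray = y.mean(axis=2)                       # average over channels
    gh, gw = gray.shape[0] // patch, gray.shape[1] // patch
    # Average intensity of each non-overlapping patch x patch block
    blocks = gray[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch)
    patch_means = blocks.mean(axis=(1, 3))
    return np.abs(patch_means - target).mean()
```

A well-exposed output drives this term toward zero, while uniformly dark or blown-out regions are penalized in proportion to how far their local mean sits from the target.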
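The parameter savings behind point 3 are easy to quantify: a depthwise separable layer replaces one dense k×k convolution with a per-channel k×k depthwise convolution followed by a 1×1 pointwise convolution. A small arithmetic sketch (bias terms omitted; the function is illustrative, not the authors' implementation):

```python
def conv_params(c_in, c_out, k):
    """Parameter counts for a standard vs. depthwise-separable conv.

    Standard conv : c_in * c_out * k * k weights
    Separable     : c_in * k * k (depthwise) + c_in * c_out (pointwise)
    """
    standard = c_in * c_out * k * k
    separable = c_in * k * k + c_in * c_out
    return standard, separable

# e.g. a 3x3 layer with 32 input and 32 output channels
std, sep = conv_params(32, 32, 3)  # 9216 vs. 1312 parameters
```

Applied across a small network, roughly 7× fewer parameters per layer is how Zero-DCE++ reaches its ~10K-parameter budget while keeping the receptive field of the original 3×3 layers.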

Experimental Results and Novel Insights

The fidelity of Zero-DCE is substantiated through extensive qualitative and quantitative evaluations against established benchmarks such as the NPE, LIME, and MEF datasets. The method exhibits top-tier performance in maintaining aesthetic quality and restoring visibility under challenging lighting conditions, with user studies also corroborating its superior perceptual results.

Zero-DCE further demonstrates the potential for improved face detection under low-light conditions. This enhancement suggests the model's practical applicability to real-world scenarios, potentially facilitating better outcomes in downstream computer vision tasks.

Implications and Future Directions

This paper's approach signifies a valuable step forward in computational photography and image processing. The implications span both theoretical perspectives by challenging established assumptions about required data inputs and practical applications in resource-constrained environments such as mobile devices. Looking ahead, this work prompts further research into zero-reference methods for broader types of image degradation and restoration tasks, potentially inspiring novel architectures that amplify learning efficiencies in unsupervised settings.

In conclusion, the work on Zero-DCE constitutes an important contribution to the domain of low-light image enhancement, offering new insights into efficient model design and training paradigms devoid of extensive labeled data, thus opening new avenues for advancement in AI-driven image processing.