
HDR image reconstruction from a single exposure using deep CNNs (1710.07480v1)

Published 20 Oct 2017 in cs.CV, cs.GR, and cs.LG

Abstract: Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images a set of different exposures is typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well-suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) that is specifically designed taking into account the challenges in predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution visually convincing HDR results in a wide range of situations, and that it generalizes well to reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare to existing methods for HDR expansion, and show high quality results also for image based lighting. Finally, we evaluate the results in a subjective experiment performed on an HDR display. This shows that the reconstructed HDR images are visually convincing, with large improvements as compared to existing methods.

Citations (531)

Summary

  • The paper presents a deep CNN that reconstructs HDR images from a single exposure by accurately predicting details lost in saturated regions.
  • It leverages a hybrid autoencoder with encoder-decoder structure and skip-connections, enhanced by transfer learning on simulated HDR datasets.
  • Results demonstrate visually convincing HDR images produced within a second, significantly outperforming traditional inverse tone-mapping operators.

HDR Image Reconstruction from a Single Exposure Using Deep CNNs

The paper discusses a novel approach to reconstruct high dynamic range (HDR) images from a single low dynamic range (LDR) exposure using deep convolutional neural networks (CNNs). This research leverages deep learning to predict information lost in saturated image areas, overcoming the inherent limitations of camera sensors' dynamic range.
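The reconstruction problem can be illustrated with a toy blending step: the network only needs to predict values for saturated pixels, and its output is blended with the (linearized) input in well-exposed regions. The sketch below is a minimal NumPy illustration of that idea, not the paper's exact formulation; the threshold value and linear blending ramp are illustrative assumptions.

```python
import numpy as np

def saturation_mask(ldr, threshold=0.95):
    """Per-pixel weight: 0 for well-exposed pixels, ramping to 1 near saturation."""
    peak = ldr.max(axis=-1, keepdims=True)
    return np.clip((peak - threshold) / (1.0 - threshold), 0.0, 1.0)

def blend_hdr(ldr_linear, predicted_hdr, threshold=0.95):
    """Keep the input in well-exposed regions; use the CNN prediction where saturated."""
    alpha = saturation_mask(ldr_linear, threshold)
    return (1.0 - alpha) * ldr_linear + alpha * predicted_hdr

# Toy example: one saturated pixel, one well-exposed pixel (H=1, W=2, C=3).
ldr = np.array([[[1.0, 1.0, 1.0], [0.5, 0.4, 0.3]]])
pred = np.array([[[8.0, 7.5, 7.0], [0.5, 0.4, 0.3]]])
out = blend_hdr(ldr, pred)
```

The well-exposed pixel passes through unchanged, while the saturated pixel takes the predicted HDR value, so the network is only trusted where the sensor actually lost information.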

Methodology

The authors propose a fully convolutional neural network designed as a hybrid dynamic range autoencoder, tailored for the specific challenges of HDR reconstruction. This architecture includes an encoder network transforming the LDR input into a compact feature representation and a decoder that reconstructs the HDR image. A crucial feature of the network is the use of skip-connections, which allow for the optimal use of high-resolution details in the reconstruction process.
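Why skip-connections help can be shown with a toy NumPy stand-in for one encoder/decoder stage: the bottleneck discards high-frequency detail, and the skip path re-injects it. This is a deliberately simplified sketch, not the paper's architecture, which uses learned convolutions and fuses features in the log domain; the averaging fusion here is an illustrative assumption.

```python
import numpy as np

def pool2(x):
    """2x2 average pooling (toy stand-in for an encoder stage)."""
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def upsample2(x):
    """Nearest-neighbour upsampling (toy stand-in for a decoder stage)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def autoencode_with_skip(img):
    """Encoder compresses, decoder reconstructs; the skip path re-injects
    full-resolution detail that the bottleneck discarded."""
    skip = img                # feature map saved on the encoder side
    code = pool2(img)         # low-resolution bottleneck
    up = upsample2(code)      # decoder path alone: detail is lost
    return 0.5 * (up + skip)  # fuse with the skip connection (toy: averaging)
```

Even in this toy setting, fusing the skip path halves the reconstruction error relative to the bottleneck path alone, which is the intuition behind using skip-connections to preserve high-resolution detail.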

The training process incorporates a large dataset of existing HDR images augmented with simulated sensor saturation, ensuring robustness across various camera settings. Additionally, a transfer-learning approach is employed by pre-training the CNN on simulated HDR datasets created from the MIT Places database.
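The saturation-simulation step amounts to running HDR data through a virtual camera: scale exposure, apply a camera response curve, then clip and quantize. The sketch below assumes a simple gamma curve as a placeholder for the range of camera response functions the authors simulate; the specific parameter values are illustrative.

```python
import numpy as np

def simulate_ldr(hdr, exposure=1.0, gamma=1.0 / 2.2, bits=8):
    """Virtual camera: exposure scaling, a gamma response curve
    (placeholder for a real CRF), sensor clipping, and quantization."""
    x = np.clip(hdr * exposure, 0.0, None)
    x = np.power(x, gamma)        # toy camera response function
    x = np.clip(x, 0.0, 1.0)      # sensor saturation
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels  # 8-bit quantization

# Luminance values above the sensor range end up clipped to 1.0.
hdr = np.array([0.01, 0.5, 2.0, 10.0])
ldr = simulate_ldr(hdr)
```

Pairing each simulated LDR frame with its HDR source yields training data in which the ground truth for the clipped regions is known, which is what lets the network learn to predict the lost content.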

Results

The paper demonstrates the proposed method's ability to generate high-quality HDR images across a wide range of scenarios. These results are achieved within a second of processing time on modern hardware, offering significant improvements over existing inverse tone-mapping operators (iTMOs). A subjective evaluation on HDR displays confirms that the reconstructed images are visually convincing, with notable improvements in perceived quality compared to both traditional LDR images and results from established iTMOs.

Implications

Practically, this method provides a viable solution for enhancing LDR images to HDR without requiring multiple exposures or specialized equipment. This can be particularly beneficial for applications such as image-based lighting, exposure correction, and advanced post-processing in photographic and cinematographic contexts. Theoretically, the paper highlights how deep learning architectures, specifically designed to incorporate domain-specific transformations and loss functions, can excel in tasks like HDR reconstruction.

Future Directions

The research opens several pathways for future exploration. One area of interest might be expanding the approach to recover not only highlights but also details lost to quantization in shadows, potentially integrating techniques from super-resolution and de-noising fields. Additionally, addressing artifacts introduced by compression in consumer-grade camera images could further broaden the method's applicability. Another intriguing direction could be enhancing the architecture with generative adversarial networks (GANs), provided the stability issues are resolved to handle high-resolution outputs effectively.

In conclusion, this work presents a sophisticated application of CNNs for HDR image reconstruction that stands out for its methodological clarity and the quality of its results, signaling a step forward in image processing research.
