
Day-to-Night Image Synthesis for Training Nighttime Neural ISPs (2206.02715v1)

Published 6 Jun 2022 in cs.CV and eess.IV

Abstract: Many flagship smartphone cameras now use a dedicated neural image signal processor (ISP) to render noisy raw sensor images to the final processed output. Training nightmode ISP networks relies on large-scale datasets of image pairs with: (1) a noisy raw image captured with a short exposure and a high ISO gain; and (2) a ground truth low-noise raw image captured with a long exposure and low ISO that has been rendered through the ISP. Capturing such image pairs is tedious and time-consuming, requiring careful setup to ensure alignment between the image pairs. In addition, ground truth images are often prone to motion blur due to the long exposure. To address this problem, we propose a method that synthesizes nighttime images from daytime images. Daytime images are easy to capture, exhibit low-noise (even on smartphone cameras) and rarely suffer from motion blur. We outline a processing framework to convert daytime raw images to have the appearance of realistic nighttime raw images with different levels of noise. Our procedure allows us to easily produce aligned noisy and clean nighttime image pairs. We show the effectiveness of our synthesis framework by training neural ISPs for nightmode rendering. Furthermore, we demonstrate that using our synthetic nighttime images together with small amounts of real data (e.g., 5% to 10%) yields performance almost on par with training exclusively on real nighttime images. Our dataset and code are available at https://github.com/SamsungLabs/day-to-night.

Citations (19)

Summary

  • The paper introduces a novel transformation pipeline that synthesizes realistic nighttime images from daytime captures to train neural ISPs.
  • The method simulates nighttime conditions through controlled exposure adjustment, relighting, and noise modeling, achieving near-real image quality.
  • Experiments show that training with synthetic data, supplemented by 5-10% real images, yields comparable performance to models trained on full real datasets.

Overview of "Day-to-Night Image Synthesis for Training Nighttime Neural ISPs"

The paper "Day-to-Night Image Synthesis for Training Nighttime Neural ISPs" addresses a significant challenge in the domain of smartphone photography: the training of neural image signal processors (ISPs) for effective low-light image processing. As smartphone cameras increasingly rely on machine learning to process raw sensor data, acquiring paired nightmode data becomes crucial yet labor-intensive. This paper proposes a novel methodology for synthesizing nighttime images from readily available daytime images, thus simplifying the data acquisition process for training these neural ISPs.

Methodology

The authors present a framework for converting daytime images into nighttime images. The process begins with capturing high-quality daytime images, which inherently have low noise levels. The core of the proposed method transforms these daytime images into synthetic nighttime images by simulating both the scene illumination and the sensor characteristics (exposure, gain, and noise) typical of nighttime capture. This transformation proceeds through several key steps:

  • Illumination Removal and Exposure Adjustment: The initial step involves white balancing to remove daytime illumination, followed by lowering the exposure levels to simulate nighttime brightness.
  • Relighting with Nighttime Illuminants: The method introduces realistic night illuminants sampled from precomputed dictionaries of nighttime lighting conditions.
  • Noise Addition: Noise consistent with high-ISO nighttime conditions is synthetically added using a heteroscedastic Gaussian model, simulating the challenges the sensor faces in low light (a minimal sketch of these steps follows this list).
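
As a rough illustration of how these steps compose, the following Python sketch converts a daytime raw image into an aligned clean/noisy nighttime pair. The function name, the white-balance convention, and the noise parameters are assumptions for illustration only; the paper's released code should be consulted for the authors' exact procedure.

```python
import numpy as np

def day_to_night(raw_day, day_wb, night_wb, exposure_scale=0.01,
                 shot_gain=0.012, read_var=1e-4):
    """Illustrative day-to-night conversion for a channel-last raw image in [0, 1].

    day_wb / night_wb: per-channel white-balance gains for the daytime capture
    and for a sampled nighttime illuminant; exposure_scale, shot_gain, and
    read_var are placeholder parameters, not values from the paper.
    """
    # 1) Illumination removal: apply the daytime white-balance gains.
    neutral = raw_day * day_wb
    # 2) Exposure adjustment: darken to a nighttime brightness level.
    dark = neutral * exposure_scale
    # 3) Relighting: undo white balance with the sampled nighttime illuminant.
    relit = np.clip(dark / night_wb, 0.0, 1.0)
    # 4) Noise addition: heteroscedastic Gaussian noise whose variance grows
    #    linearly with the signal (shot noise) plus a constant term (read noise).
    sigma = np.sqrt(shot_gain * relit + read_var)
    noisy = np.clip(relit + np.random.normal(size=relit.shape) * sigma, 0.0, 1.0)
    return relit, noisy  # aligned clean / noisy synthetic nighttime pair
```

In practice the relighting step would draw illuminants from the precomputed nighttime dictionaries mentioned above, and multiple noise levels can be produced by varying shot_gain and read_var.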

By applying these transformations, the method produces aligned clean and noisy synthetic nighttime image pairs suitable for training deep neural networks that serve as nightmode neural ISPs.
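
Assuming a PyTorch training setup and the day_to_night helper sketched above, a minimal, hypothetical dataset wrapper that serves such aligned pairs could look like the following; the class and argument names are illustrative and do not come from the released code.

```python
import torch
from torch.utils.data import Dataset

class SyntheticNightPairs(Dataset):
    """Hypothetical wrapper that generates aligned (noisy, clean) nighttime
    pairs on the fly from daytime raws, using the day_to_night sketch above."""

    def __init__(self, daytime_raws, day_wbs, night_wb_bank):
        self.daytime_raws = daytime_raws    # list of HxWxC float32 arrays
        self.day_wbs = day_wbs              # per-image daytime white-balance gains
        self.night_wb_bank = night_wb_bank  # sampled nighttime illuminant gains

    def __len__(self):
        return len(self.daytime_raws)

    def __getitem__(self, idx):
        night_wb = self.night_wb_bank[torch.randint(len(self.night_wb_bank), (1,)).item()]
        clean, noisy = day_to_night(self.daytime_raws[idx], self.day_wbs[idx], night_wb)
        # The neural ISP learns to map the noisy raw input to the clean target.
        return (torch.from_numpy(noisy).permute(2, 0, 1).float(),
                torch.from_numpy(clean).permute(2, 0, 1).float())
```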

Experimental Evaluation

The proposed framework is extensively validated through experiments. The authors train neural ISPs on the synthesized images and compare them against models trained on real nighttime data. The approach is quantitatively evaluated using metrics such as PSNR, SSIM, and ΔE, confirming that the synthetic data closely approximates real nighttime scenes. The results show that training on synthetic data yields performance almost on par with training on real data, especially when supplemented with a small fraction (5% to 10%) of real nighttime images.
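
For concreteness, the snippet below shows one way to compute these three kinds of metrics with scikit-image; it assumes sRGB renders in [0, 1] and uses the CIEDE2000 formulation for ΔE, which may differ from the exact variant reported in the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.color import rgb2lab, deltaE_ciede2000

def evaluate_render(pred_rgb: np.ndarray, target_rgb: np.ndarray):
    """PSNR, SSIM, and mean CIEDE2000 Delta E for two sRGB renders in [0, 1]."""
    psnr = peak_signal_noise_ratio(target_rgb, pred_rgb, data_range=1.0)
    # channel_axis requires scikit-image >= 0.19.
    ssim = structural_similarity(target_rgb, pred_rgb, data_range=1.0, channel_axis=-1)
    delta_e = float(deltaE_ciede2000(rgb2lab(target_rgb), rgb2lab(pred_rgb)).mean())
    return psnr, ssim, delta_e
```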

Implications and Future Work

This research carries significant implications for practical and theoretical advancements in neural ISPs. Practical benefits include a reduction in the resources and effort required to capture extensive nighttime datasets, potentially accelerating the deployment of nightmode capabilities across a wider array of camera-enabled devices. Theoretically, the paper contributes to the domains of image synthesis and domain adaptation, showcasing an effective use case that merges the two.

Future research directions may include refining the synthesis process by incorporating advanced noise modeling to bridge the gap between synthetic and real nighttime images further. Additionally, exploring the applicability of this method across diverse camera sensors and varied environmental conditions could enhance the robustness of the ISP training process. Overall, this paper provides a strong foundation for improving the imaging capabilities of neural ISPs in low-light conditions, paving the way for further developments in mobile photography and beyond.
