NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images (2111.13679v1)

Published 26 Nov 2021 in cs.CV, cs.GR, and eess.IV

Abstract: Neural Radiance Fields (NeRF) is a technique for high quality novel view synthesis from a collection of posed input images. Like most view synthesis methods, NeRF uses tonemapped low dynamic range (LDR) as input; these images have been processed by a lossy camera pipeline that smooths detail, clips highlights, and distorts the simple noise distribution of raw sensor data. We modify NeRF to instead train directly on linear raw images, preserving the scene's full dynamic range. By rendering raw output images from the resulting NeRF, we can perform novel high dynamic range (HDR) view synthesis tasks. In addition to changing the camera viewpoint, we can manipulate focus, exposure, and tonemapping after the fact. Although a single raw image appears significantly more noisy than a postprocessed one, we show that NeRF is highly robust to the zero-mean distribution of raw noise. When optimized over many noisy raw inputs (25-200), NeRF produces a scene representation so accurate that its rendered novel views outperform dedicated single and multi-image deep raw denoisers run on the same wide baseline input images. As a result, our method, which we call RawNeRF, can reconstruct scenes from extremely noisy images captured in near-darkness.

Citations (329)

Summary

  • The paper introduces RawNeRF, which trains directly on linear noisy raw images to achieve robust HDR view synthesis under low-light conditions.
  • It modifies the traditional NeRF pipeline by bypassing the camera processing to preserve full scene detail and manage extreme luminance variations.
  • Empirical results show that RawNeRF outperforms dedicated denoising networks in novel view rendering and supports effective exposure adjustment.

High Dynamic Range View Synthesis Using RawNeRF from Noisy Raw Images

The paper "NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images" innovatively adapts Neural Radiance Fields (NeRF) for processing high dynamic range (HDR) data, specifically targeting the challenges posed by noisy raw image inputs captured in low-light conditions. The proposed method, RawNeRF, departs from the conventional practice of processing low dynamic range (LDR) inputs by training directly on linear raw images, thus preserving full scene detail and allowing for flexible post-processing. This novel approach harnesses NeRF's robustness to noise, showcasing its capacity for effective HDR view synthesis.

Modifying NeRF's Inputs: Linear Raw Images

RawNeRF revamps NeRF's input paradigm by training on linear raw images that bypass the conventional camera pipeline, whose tone mapping and clipping lossily compress a scene's dynamic range. Raw images retain a simple, zero-mean noise distribution, giving NeRF a foundation on which to train robustly even with the substantial noise found in nighttime captures or scenes with a vast contrast range. This robustness extends NeRF to environments that were previously impractical due to extreme lighting variability.
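To make the zero-mean argument concrete, the sketch below (an illustration, not code from the paper) averages many noisy linear observations of a dark pixel: in linear raw space the average is an unbiased estimate of the true value, whereas averaging after a nonlinear tonemap (a simple gamma curve here) skews the estimate. This is why the postprocessed LDR pipeline "distorts the simple noise distribution of raw sensor data", and why an L2-trained NeRF, which effectively converges toward the mean of its observations, fares better on linear raw inputs.

```python
import jax
import jax.numpy as jnp

# Illustration only: averaging zero-mean noisy raw observations of a dark pixel
# is unbiased, while averaging after a nonlinear tonemap is not.
key = jax.random.PRNGKey(0)
true_linear = 0.01                                # a dark linear pixel value
noise = 0.02 * jax.random.normal(key, (10_000,))  # zero-mean sensor noise
raw_obs = true_linear + noise                     # linear raw observations

gamma = lambda x: jnp.clip(x, 0.0, None) ** (1.0 / 2.2)  # simple gamma tonemap

print("mean of raw observations:        ", float(raw_obs.mean()))        # ~0.01, unbiased
print("tonemap of the true value:       ", float(gamma(true_linear)))    # ~0.12
print("mean of tonemapped observations: ", float(gamma(raw_obs).mean())) # noticeably lower
```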

Quantitative Results and Methodology

The paper demonstrates RawNeRF's efficacy through empirical results, showing performance superior to both single-image and multi-image deep raw denoisers. Trained on 25-200 noisy raw inputs, RawNeRF builds a scene representation whose rendered novel views surpass dedicated denoisers run on the same wide-baseline inputs. The advantage is most pronounced in HDR scenarios with drastic luminance variation, where RawNeRF also outperforms HDR view synthesis techniques built on LDR inputs.

The technical approach involves leveraging reweighted loss functions adjusted for HDR data, allowing RawNeRF to maintain unbiased training across varying noise levels and dynamic ranges. The system further integrates learned exposure adjustments to address shutter speed miscalibration. Extensive comparisons reveal RawNeRF's competitive edge, particularly in wide-baseline static scenes where typical denoising networks would falter.
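The reweighted loss and the exposure correction can be sketched as follows. This is a hedged approximation of what the paper describes, not its exact implementation: the per-pixel squared error is divided by a stop-gradient of the rendered value, which mimics a loss on tonemapped values while keeping gradients unbiased under zero-mean noise, and a learned per-image log-exposure scalar (a placeholder constant here) compensates for shutter-speed miscalibration.

```python
import jax
import jax.numpy as jnp

def rawnerf_style_loss(rendered_linear, observed_raw, eps=1e-3):
    """Sketch of a reweighted L2 loss: each error is divided by a stop-gradient
    of the rendered value, so dark pixels contribute as much as bright ones."""
    weight = jax.lax.stop_gradient(rendered_linear) + eps
    return jnp.mean(((rendered_linear - observed_raw) / weight) ** 2)

def apply_learned_exposure(rendered_linear, log_exposure_scale):
    """Hypothetical per-image exposure correction: a learned scalar (in log
    space) compensates for shutter-speed miscalibration before the rendering
    is compared against that image's raw pixels."""
    return rendered_linear * jnp.exp(log_exposure_scale)

# Toy usage with random stand-ins for a batch of rendered rays and raw targets.
key_r, key_n = jax.random.split(jax.random.PRNGKey(1))
rendered = jax.nn.softplus(jax.random.normal(key_r, (1024, 3)))   # positive linear radiance
observed = rendered + 0.05 * jax.random.normal(key_n, (1024, 3))  # noisy raw targets
loss = rawnerf_style_loss(apply_learned_exposure(rendered, 0.0), observed)
print(float(loss))
```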

Theoretical and Practical Implications

The research expands the applicability of NeRF to HDR scenarios by accommodating the noise characteristics inherent in raw data, maintaining fidelity across both extremes of brightness. The ability to adjust focus, exposure, and tonemapping after capture mirrors the flexibility traditionally associated with raw photography post-processing, while extending it to photorealistic view synthesis.
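Because RawNeRF renders linear HDR values, standard post-capture edits reduce to simple operations on the rendered image. The sketch below is an assumed example rather than the paper's own pipeline (its tonemapping and synthetic-defocus controls are more involved): exposure is changed by a gain in linear space before a display tonemap, here the standard sRGB transfer function.

```python
import jax.numpy as jnp

def srgb_tonemap(linear):
    """Standard sRGB transfer function applied to a clipped linear image."""
    x = jnp.clip(linear, 0.0, 1.0)
    return jnp.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1.0 / 2.4) - 0.055)

def retouch(hdr_render, exposure_stops=0.0):
    """Hypothetical post-capture edit: because the NeRF output is linear HDR,
    exposure is changed by a simple gain before any display tonemap."""
    return srgb_tonemap(hdr_render * (2.0 ** exposure_stops))

# e.g. brighten a near-dark rendering by three stops, then tonemap for display:
# ldr_image = retouch(hdr_render, exposure_stops=3.0)
```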

Future developments in AI could leverage aspects of RawNeRF to enhance computational photography and image-based modeling, particularly for real-time scene rendering and augmented reality. Incorporating raw data into neural rendering frameworks could improve realistic depictions of environments captured under non-ideal conditions, relaxing the noise and dynamic range constraints that limited earlier methods.

In conclusion, the paper effectively extends the boundaries of neural radiance fields by proposing RawNeRF, a method that, through innovative use of raw image data, improves upon both the denoising and HDR capabilities of traditional NeRF implementations. This advancement holds promise for subsequent endeavors in the exploitation of raw sensor data in neural rendering applications.