- The paper introduces a dual neural radiance field framework that pairs an SDF-based field with a density field to improve both geometric detail and the realism of synthesized views.
- The approach employs multi-resolution feature grids and geometry-guided sampling to optimize rendering fidelity and training efficiency.
- Experiments on NeuralRGBD and Replica datasets show that Du-NeRF sets new benchmarks in accuracy, PSNR, and SSIM while reducing shape-radiance ambiguity.
Introduction
Du-NeRF is a dual neural radiance field approach for 3D reconstruction and novel view synthesis, tailored to the complexities of indoor environments. The framework employs two geometric fields, one derived from the Signed Distance Function (SDF) and one from volumetric density, and demonstrates significant gains in geometry reconstruction quality alongside improved novel view rendering. Notably, the model uses self-supervision to decouple a view-independent color component from the density field; this component then serves as a label for training the SDF field, reducing shape-radiance ambiguity and letting geometry and color cues reinforce each other during learning.
Method
At the core of Du-NeRF are two geometric fields: an SDF-based field for recovering geometric structure and a density field optimized for rendering. This separation lets the system excel at both reconstructing fine geometric detail and synthesizing realistic views. Depth images provide supplementary supervision during training, strengthening both fields.
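As a rough illustration of this dual-field design, here is a minimal PyTorch sketch (not the authors' code): an SDF branch whose signed distances are converted to densities with a VolSDF-style Laplace transform, a separate density branch for rendering, and depth images supervising the rendered depth of both. The network sizes, the `beta` parameter, the shared color head, and the loss weighting are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def volume_render(sigma, rgb, z_vals):
    """Composite per-sample densities/colors along each ray into a pixel
    color and an expected depth (standard NeRF quadrature)."""
    deltas = z_vals[..., 1:] - z_vals[..., :-1]                       # [R, S-1]
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[..., :1])], -1)
    alpha = 1.0 - torch.exp(-sigma * deltas)                          # [R, S]
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]), 1 - alpha + 1e-10], -1), -1
    )[..., :-1]
    weights = alpha * trans
    color = (weights[..., None] * rgb).sum(-2)                        # [R, 3]
    depth = (weights * z_vals).sum(-1)                                # [R]
    return color, depth

class DualField(nn.Module):
    """Hypothetical sketch of the dual-field idea: an SDF branch for
    geometry and a density branch for rendering, trained on the same rays."""
    def __init__(self, hidden=64):
        super().__init__()
        self.sdf_net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.density_net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.rgb_net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        self.beta = nn.Parameter(torch.tensor(0.1))  # SDF-to-density sharpness

    def sdf_to_sigma(self, sdf):
        # VolSDF-style Laplace-CDF conversion (an assumption; Du-NeRF's
        # exact transform may differ).
        beta = self.beta.abs() + 1e-4
        x = -sdf
        psi = torch.where(x <= 0, 0.5 * torch.exp(x / beta), 1 - 0.5 * torch.exp(-x / beta))
        return psi / beta

    def forward(self, pts, z_vals):
        rgb = torch.sigmoid(self.rgb_net(pts))                        # [R, S, 3]
        sigma_geo = self.sdf_to_sigma(self.sdf_net(pts).squeeze(-1))
        sigma_rad = F.softplus(self.density_net(pts).squeeze(-1))
        c_geo, d_geo = volume_render(sigma_geo, rgb, z_vals)
        c_rad, d_rad = volume_render(sigma_rad, rgb, z_vals)
        return c_geo, d_geo, c_rad, d_rad

# Depth images supervise both branches alongside the photometric loss.
def training_loss(model, pts, z_vals, gt_rgb, gt_depth):
    c_geo, d_geo, c_rad, d_rad = model(pts, z_vals)
    loss = F.mse_loss(c_rad, gt_rgb) + F.mse_loss(c_geo, gt_rgb)
    loss = loss + F.l1_loss(d_rad, gt_depth) + F.l1_loss(d_geo, gt_depth)
    return loss
```

In this sketch the density branch drives photorealistic rendering while the SDF branch yields a clean surface (extractable via marching cubes); sharing one color head is just one plausible way to couple the two branches.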
The approach also adopts a multi-resolution feature grid to improve training efficiency, which is critical given the volume of data involved in indoor scene processing, and it combines geometry-guided sampling with hierarchical volume rendering so that sample points concentrate near object surfaces, improving rendering fidelity.
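Both ideas can be sketched as follows, again as assumptions rather than the paper's implementation: a dense multi-resolution feature grid queried by trilinear interpolation (real systems often use hash grids instead), and a sampler that places extra fine samples in a narrow band around the first SDF sign change along each ray. The resolutions, channel counts, `n_fine`, and `band` values are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResGrid(nn.Module):
    """Illustrative multi-resolution feature grid: dense voxel grids at
    several resolutions, trilinearly interpolated and concatenated."""
    def __init__(self, resolutions=(16, 32, 64, 128), channels=4):
        super().__init__()
        self.grids = nn.ParameterList([
            nn.Parameter(0.01 * torch.randn(1, channels, r, r, r)) for r in resolutions
        ])

    def forward(self, pts):
        # pts: [N, 3] in [-1, 1]^3; grid_sample expects coords of shape
        # [B, D, H, W, 3] and does trilinear interpolation for 5-D inputs.
        coords = pts.view(1, -1, 1, 1, 3)
        feats = [F.grid_sample(g, coords, mode='bilinear', align_corners=True)
                     .view(g.shape[1], -1).t() for g in self.grids]   # each [N, C]
        return torch.cat(feats, dim=-1)                               # [N, levels * C]

def geometry_guided_samples(sdf_fn, origins, dirs, z_coarse, n_fine=16, band=0.05):
    """Concentrate extra samples where the coarse SDF changes sign, i.e.
    near the surface. `band` is an assumed half-width around the crossing;
    rays with no detected crossing fall back to their first coarse sample."""
    pts = origins[:, None, :] + z_coarse[..., None] * dirs[:, None, :]  # [R, S, 3]
    sdf = sdf_fn(pts.reshape(-1, 3)).reshape(z_coarse.shape)            # [R, S]
    cross = (sdf[..., :-1] > 0) & (sdf[..., 1:] <= 0)                   # [R, S-1]
    idx = torch.where(cross.any(-1), cross.float().argmax(-1),
                      torch.zeros(z_coarse.shape[0], dtype=torch.long,
                                  device=z_coarse.device))
    z_surf = torch.gather(z_coarse, 1, idx[:, None]).squeeze(1)         # [R]
    offsets = torch.linspace(-band, band, n_fine, device=z_coarse.device)
    z_fine = z_surf[:, None] + offsets[None, :]                         # [R, n_fine]
    return torch.sort(torch.cat([z_coarse, z_fine], -1), -1).values
```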
Self-supervised color decomposition is a pivotal component: it disentangles color into view-independent and view-dependent parts. The view-independent color then guides geometric learning, refining the 3D reconstruction in a multi-view-consistent manner and mitigating artifacts in regions that lack multi-view color consistency.
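A minimal sketch of such a decomposition, assuming PyTorch and illustrative layer sizes: a view-independent color head conditioned only on positional features, a view-dependent residual head that also sees the ray direction, and a guidance loss that uses the (detached) view-independent color from the density branch as a pseudo-label for the SDF branch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedColor(nn.Module):
    """Sketch of self-supervised color decomposition: the view-independent
    head never sees the viewing direction, so it can only explain colors
    that are consistent across views."""
    def __init__(self, feat_dim=16, hidden=64):
        super().__init__()
        self.view_indep = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 3))
        self.view_dep = nn.Sequential(nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 3))

    def forward(self, feats, view_dirs):
        c_vi = torch.sigmoid(self.view_indep(feats))                 # view-independent
        c_vd = self.view_dep(torch.cat([feats, view_dirs], -1))      # view-dependent residual
        return torch.clamp(c_vi + c_vd, 0.0, 1.0), c_vi

# The view-independent color from the density branch serves as a pseudo-label
# for the SDF branch; detaching stops gradients from flowing into the label.
def color_guidance_loss(c_vi_density, c_vi_sdf):
    return F.mse_loss(c_vi_sdf, c_vi_density.detach())
```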
Experiments
Extensive experiments were conducted on synthetic and real-world datasets, namely NeuralRGBD and Replica. Du-NeRF substantially outperformed state-of-the-art methods on all fronts: it achieves the highest accuracy on 3D reconstruction metrics while also delivering superior view synthesis, with high PSNR and SSIM scores and low LPIPS errors.
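For reference, PSNR, the primary image-quality metric reported, is a simple function of mean squared error; SSIM and LPIPS are usually computed with standard packages (e.g., scikit-image and the lpips library). A minimal PSNR helper:

```python
import torch

def psnr(pred, gt, max_val=1.0):
    """PSNR in decibels for images scaled to [0, max_val]; higher is better."""
    mse = torch.mean((pred - gt) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```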
Notably, ablation experiments confirmed that the dual-field structure boosts both reconstruction and rendering performance, and further tests attested to the benefits of self-supervised color decomposition for overall reconstruction quality. Because color and geometry have separate representations that inform each other, the design significantly elevates the end results.
Conclusion
The dual neural radiance field technique introduced here is a robust method for simultaneously improving 3D reconstruction and view synthesis in indoor environments. By resolving the interplay between geometry and color information, it sets a new benchmark for quality and realism. While the current implementation delivers strong results, further research is warranted on limited-data scenarios and on the impact of image blur on system performance.