
3D Reconstruction and New View Synthesis of Indoor Environments based on a Dual Neural Radiance Field (2401.14726v2)

Published 26 Jan 2024 in cs.CV and cs.GR

Abstract: Simultaneously achieving 3D reconstruction and new view synthesis for indoor environments has widespread applications but is technically very challenging. State-of-the-art methods based on implicit neural functions can achieve excellent 3D reconstruction results, but their performances on new view synthesis can be unsatisfactory. The exciting development of neural radiance field (NeRF) has revolutionized new view synthesis; however, NeRF-based models can fail to reconstruct clean geometric surfaces. We have developed a dual neural radiance field (Du-NeRF) to simultaneously achieve high-quality geometry reconstruction and view rendering. Du-NeRF contains two geometric fields, one derived from the SDF field to facilitate geometric reconstruction and the other derived from the density field to boost new view synthesis. One of the innovative features of Du-NeRF is that it decouples a view-independent component from the density field and uses it as a label to supervise the learning process of the SDF field. This reduces shape-radiance ambiguity and enables geometry and color to benefit from each other during the learning process. Extensive experiments demonstrate that Du-NeRF can significantly improve the performance of novel view synthesis and 3D reconstruction for indoor environments and it is particularly effective in constructing areas containing fine geometries that do not obey multi-view color consistency.

Authors (6)
  1. Zhenyu Bao
  2. Guibiao Liao
  3. Zhongyuan Zhao
  4. Kanglin Liu
  5. Qing Li
  6. Guoping Qiu

Summary

  • The paper introduces a dual neural radiance field framework that uses SDF and density fields to boost both geometric detail and realistic view synthesis.
  • The approach employs multi-resolution feature grids and geometry-guided sampling to optimize rendering fidelity and training efficiency.
  • Experiments on NeuralRGBD and Replica datasets show that Du-NeRF sets new benchmarks in accuracy, PSNR, and SSIM while reducing shape-radiance ambiguity.

Introduction

A Dual Neural Radiance Field (Du-NeRF) approach has been developed for 3D reconstruction and new view synthesis, tailored to the complexities of indoor environments. The framework uses two geometric fields: one derived from a Signed Distance Function (SDF) to facilitate geometry reconstruction, and one derived from a density field to improve new view rendering. Notably, the model uses self-supervision to decouple a view-independent component from the density field; using this component as a label when training the SDF field reduces shape-radiance ambiguity and lets geometry and color cues reinforce each other during learning.
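
To make the cross-field supervision concrete, here is a minimal PyTorch sketch, with illustrative names rather than the authors' code, of using the detached view-independent color as a pseudo-label for the SDF branch:

```python
import torch

def cross_field_color_loss(c_sdf: torch.Tensor,
                           c_view_independent: torch.Tensor) -> torch.Tensor:
    """c_sdf: (N, 3) colors predicted along the SDF branch;
    c_view_independent: (N, 3) view-independent colors from the density branch."""
    # detach() stops gradients from flowing back into the density branch,
    # so its view-independent color acts as a fixed label for the SDF field.
    return torch.mean((c_sdf - c_view_independent.detach()) ** 2)
```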

Method

At the core of Du-NeRF are two geometric fields: an SDF-based field for geometric structure and a density field optimized for rendering. Together they enable the system to excel simultaneously at reconstructing fine geometric detail and at synthesizing realistic views. Depth images provide supplementary supervision during network training, strengthening both fields; a sketch of this depth term follows.
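
The paper states that depth images supplement training; the exact loss form below is an assumption, showing one common pattern in which depth rendered from volume-rendering weights is compared against the sensor depth map:

```python
import torch

def rendered_depth(weights: torch.Tensor, z_vals: torch.Tensor) -> torch.Tensor:
    """weights: (rays, samples) volume-rendering weights;
    z_vals: (rays, samples) sample depths along each ray."""
    return (weights * z_vals).sum(dim=-1)

def depth_loss(weights, z_vals, gt_depth, valid_mask):
    """gt_depth: (rays,) sensor depth; valid_mask: (rays,) bool for valid pixels."""
    # L1 error between rendered and sensor depth, restricted to valid pixels.
    d = rendered_depth(weights, z_vals)
    return torch.abs(d - gt_depth)[valid_mask].mean()
```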

The approach adopts a multi-resolution feature grid to improve efficiency, which is particularly important given the volume of data involved in indoor scene processing. It combines geometry-guided sampling with hierarchical volume rendering, concentrating sample points near object surfaces to improve rendering fidelity; both ingredients are sketched below.
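
A hedged sketch of the two ingredients named above, a multi-resolution feature-grid query and near-surface sampling; grid shapes, band width, and function names are assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def query_multires_grids(grids, xyz):
    """grids: list of (1, C, D, H, W) feature volumes at increasing resolution;
    xyz: (N, 3) points normalized to [-1, 1].
    Returns per-point features concatenated across levels."""
    coords = xyz.view(1, -1, 1, 1, 3)  # layout expected by grid_sample
    feats = [
        F.grid_sample(g, coords, mode="bilinear", align_corners=True)
         .view(g.shape[1], -1).t()     # -> (N, C) per level
        for g in grids
    ]
    return torch.cat(feats, dim=-1)

def surface_guided_samples(surface_depth, n_samples=16, band=0.05):
    """Draw sample depths uniformly inside a narrow band around an estimated
    surface depth (rays,), concentrating points near the geometry."""
    offsets = (torch.rand(surface_depth.shape[0], n_samples) - 0.5) * 2 * band
    depths = surface_depth.unsqueeze(-1) + offsets
    return depths.clamp(min=0.0).sort(dim=-1).values

# Example usage with random data:
grids = [torch.randn(1, 8, r, r, r) for r in (32, 64, 128)]
pts = torch.rand(1024, 3) * 2 - 1
feats = query_multires_grids(grids, pts)              # (1024, 24)
z = surface_guided_samples(torch.full((256,), 2.0))   # (256, 16)
```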

Self-supervised color decomposition is a pivotal component, disentangling color into view-independent and view-dependent elements. The method leverages the view-independent color to guide geometric learning, refining the 3D reconstruction in a multi-view-consistent manner and coping with areas that do not maintain multi-view color consistency.
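
One plausible form of such a decomposition, assumed here for illustration: a view-independent head that sees only position features, plus a view-dependent residual head that also sees the viewing direction, with only their sum receiving photometric supervision so the split emerges without extra labels.

```python
import torch
import torch.nn as nn

class ColorDecomposition(nn.Module):
    def __init__(self, feat_dim=32, dir_dim=3, hidden=64):
        super().__init__()
        # View-independent head: position features only.
        self.view_independent = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        # View-dependent residual head: features plus viewing direction.
        self.view_dependent = nn.Sequential(
            nn.Linear(feat_dim + dir_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, feats, view_dirs):
        c_vi = torch.sigmoid(self.view_independent(feats))
        c_vd = self.view_dependent(torch.cat([feats, view_dirs], dim=-1))
        # Only the summed color is matched to ground-truth pixels, so the
        # view-independent/view-dependent split is learned self-supervised.
        return torch.clamp(c_vi + c_vd, 0.0, 1.0), c_vi

# Example usage (view_dirs would be unit vectors in practice):
model = ColorDecomposition()
color, c_vi = model(torch.randn(1024, 32), torch.randn(1024, 3))
```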

Experiments

Extensive experiments were conducted on synthetic and real-world datasets, namely NeuralRGBD and Replica. Du-NeRF demonstrated substantial improvements over state-of-the-art methods on all fronts, achieving the highest accuracy in 3D reconstruction metrics while delivering superior view synthesis results, characterized by high PSNR and SSIM scores and low LPIPS errors.
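
For reference, PSNR, the rendering metric reported, has a standard closed form; a short computation for images scaled to [0, 1]:

```python
import torch

def psnr(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Peak signal-to-noise ratio for images in [0, 1]; higher is better."""
    mse = torch.mean((pred - gt) ** 2)
    return -10.0 * torch.log10(mse)
```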

Individual experiments confirmed the efficacy of the dual-field structure in boosting both reconstruction and rendering performance, and further tests showed that self-supervised color decomposition enhances overall reconstruction quality. By design, the separate representations of color and geometry reinforce each other, significantly improving the final results.

Conclusion

The dual neural radiance field technique introduced here constitutes a robust method for simultaneously improving 3D reconstruction and view synthesis in indoor environments. It sets a new benchmark for quality and realism by effectively resolving the interplay between geometry and color information. While the current implementation delivers impressive results, further research is warranted into limited-data scenarios and into the impact of image blurriness on system performance.