Neural Inverse Rendering from Propagating Light (2506.05347v1)

Published 5 Jun 2025 in cs.CV

Abstract: We present the first system for physically based, neural inverse rendering from multi-viewpoint videos of propagating light. Our approach relies on a time-resolved extension of neural radiance caching -- a technique that accelerates inverse rendering by storing infinite-bounce radiance arriving at any point from any direction. The resulting model accurately accounts for direct and indirect light transport effects and, when applied to captured measurements from a flash lidar system, enables state-of-the-art 3D reconstruction in the presence of strong indirect light. Further, we demonstrate view synthesis of propagating light, automatic decomposition of captured measurements into direct and indirect components, as well as novel capabilities such as multi-view time-resolved relighting of captured scenes.

Summary

  • The paper presents a novel time-resolved radiance caching method that decomposes direct and indirect light for precise scene reconstruction.
  • It employs a dual-component model using photometric equations and radiance caching to efficiently integrate multiple light bounces.
  • Experiments on a new multi-view lidar dataset demonstrate significant gains in 3D geometry accuracy and visual fidelity compared to traditional methods.

Neural Inverse Rendering from Propagating Light: A Formal Overview

The paper "Neural Inverse Rendering from Propagating Light" presents a novel approach for physically-based, neural inverse rendering using multi-viewpoint videos acquired from a flash lidar system. The work introduces a time-resolved extension of neural radiance caching to handle the complex light interactions seen in time-resolved lidar measurements. This development marks a significant progression over existing lidar-based rendering systems, enabling more accurate scene modeling and facilitating novel applications such as time-resolved relighting.

Technical Contributions

The authors propose a dual-component model for light transport decomposition: direct light, which arrives after a single bounce of the pulsed light source off the scene, and indirect light, which reflects off surfaces multiple times. The direct component is modeled using conventional photometric equations accounting for time-dependent light falloff. The indirect component utilizes a radiance cache, enabling efficient integration of multiply scattered light without incurring the prohibitive computational expense characteristic of recursive evaluation methods like path tracing.
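
To make this decomposition concrete, a generic time-resolved form is sketched below in LaTeX. The notation (source position l, pulse profile g, cache L_cache) is an illustrative assumption rather than the paper's exact formulation: the direct term applies inverse-square falloff and a time-of-flight delay to the laser pulse, while the indirect term integrates cached infinite-bounce radiance over the hemisphere instead of tracing recursive paths.

```latex
% Illustrative time-resolved decomposition; generic notation, not the paper's exact formulation.
% tau: transient radiance toward the sensor, l: pulsed source position, g: laser pulse profile,
% c: speed of light, L_cache: time-resolved radiance cache (infinite-bounce incoming radiance).
\begin{equation*}
\tau(\mathbf{x}, \omega_o, t) \;=\;
\underbrace{f_r(\mathbf{x}, \omega_l, \omega_o)\,
\frac{\cos\theta_l}{\lVert \mathbf{x} - \mathbf{l} \rVert^{2}}\,
g\!\left(t - \frac{\lVert \mathbf{x} - \mathbf{l} \rVert}{c}\right)}_{\text{direct (single bounce from the source)}}
\;+\;
\underbrace{\int_{\Omega}
f_r(\mathbf{x}, \omega_i, \omega_o)\,
L_{\mathrm{cache}}(\mathbf{x}, \omega_i, t)\,
\cos\theta_i \,\mathrm{d}\omega_i}_{\text{indirect (queried from the radiance cache)}}
\end{equation*}
```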

Key contributions include:

  • Time-Resolved Radiance Caching: This approach extends neural radiance caching to store infinite-bounce radiance across the temporal domain, allowing robust modeling and inversion of complex indirect light phenomena (a minimal query sketch follows this list).
  • Novel Lidar Dataset: A carefully captured multi-viewpoint dataset is provided, which includes precise calibration of light source and camera positions, enabling rigorous testing of the proposed method on real-world measurements.
  • State-of-the-Art 3D Reconstruction: The proposed methodology exhibits impressive performance in reconstructing 3D geometry from lidar data, particularly in the presence of significant indirect lighting effects.
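
As a rough illustration of the time-resolved radiance cache, the PyTorch sketch below shows one way such a cache could be queried to shade indirect light: a small MLP maps a position, incoming direction, and time to cached infinite-bounce radiance, so each sampled direction costs a single network evaluation instead of a recursive path trace. All names, network sizes, and encodings are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a time-resolved radiance cache and how it replaces recursive
# path tracing when shading indirect light. All names, sizes, and the absence of
# positional/temporal encodings are illustrative assumptions.
import torch
import torch.nn as nn


class TimeResolvedRadianceCache(nn.Module):
    """MLP mapping (position, incoming direction, time) -> cached RGB radiance."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, omega_i, t):
        return self.net(torch.cat([x, omega_i, t], dim=-1))


def shade_indirect(cache, x, normals, brdf, t, n_samples: int = 64):
    """Monte Carlo estimate of indirect radiance using cache queries only.

    x, normals: (N, 3); t: (N, 1); brdf(x, dirs) -> (N, S, 3) reflectance values.
    """
    dirs = torch.randn(x.shape[0], n_samples, 3)
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)              # uniform directions on the sphere
    cos = (dirs * normals[:, None, :]).sum(-1).clamp(min=0.0)  # zero out the lower hemisphere
    L_i = cache(x[:, None, :].expand(-1, n_samples, -1),       # one cache query per sample,
                dirs,                                          # no recursion into further bounces
                t[:, None, :].expand(-1, n_samples, -1))
    # Uniform-sphere pdf is 1 / (4*pi), so the estimator rescales the mean by 4*pi.
    return (brdf(x, dirs) * L_i * cos[..., None]).mean(dim=1) * (4.0 * torch.pi)
```

Because the cache already stores multiply scattered light, the Monte Carlo sum above needs no recursion into further bounces; this is the property that keeps the inverse-rendering optimization tractable.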

This work departs from previous lidar pipelines, which largely discard indirect light because it is intricate and computationally demanding to model. Here, indirect light is actively exploited as a rich source of information about scene properties, which is vital for reconstructing accurate scene geometry.

Results and Findings

The authors validate their framework through a series of simulated and real-world experiments. Results indicate considerable improvements over traditional methods, especially in scenarios where indirect light is prominent, such as specular reflections and diffuse interreflections. Both simulated and real-world datasets demonstrate the model's strengths, with measurable gains in geometry reconstruction accuracy and visual fidelity across novel viewpoints.

Quantitatively, the model achieves higher peak signal-to-noise ratio (PSNR), higher structural similarity (SSIM), and lower perceptual distance (LPIPS) scores than benchmarks such as T-NeRF and FWP++. Furthermore, qualitative evaluations highlight the method's capacity to recover superior normal maps and depth information, particularly in complex lighting scenarios involving multiple reflections.
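
For reference, these image-quality metrics are typically computed with off-the-shelf libraries; the sketch below (scikit-image for PSNR/SSIM, the lpips package for LPIPS) shows a generic evaluation routine, not the paper's evaluation code.

```python
# Generic novel-view evaluation sketch: PSNR / SSIM via scikit-image, LPIPS via the lpips package.
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate(pred: np.ndarray, gt: np.ndarray) -> dict:
    """pred, gt: float images in [0, 1] with shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    # LPIPS expects torch tensors of shape (1, 3, H, W) scaled to [-1, 1].
    to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1)[None].float() * 2 - 1
    lpips_fn = lpips.LPIPS(net="alex")
    with torch.no_grad():
        lp = lpips_fn(to_tensor(pred), to_tensor(gt)).item()
    return {"psnr": psnr, "ssim": ssim, "lpips": lp}
```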

Implications and Future Directions

This research advances the field of lidar-based rendering by integrating physically-grounded models with neural networks to achieve robust inverse rendering under time-resolved conditions. Applications beyond rendering are foreseeable, including real-time scene understanding and navigation, autonomous vehicle systems that demand high-fidelity environmental reconstruction, and enhanced visualization in augmented or virtual reality settings.

Future work could explore accelerating the computational aspects of radiance caching, potentially via more compact neural representations or specialized hardware. Additionally, integrating the proposed methodology with emerging techniques such as dynamic neural radiance fields could further improve the quality of inverse-rendered scenes.

Overall, this paper furnishes a comprehensive blueprint for leveraging indirect light in lidar systems, expanding what is achievable in neural rendering of complex light transport and establishing a new standard for lidar-based scene reconstruction and visualization.