Interpretability of neural radiance fields encoded in network weights
Develop principled methods to analyze and interpret neural radiance fields: continuous 5D functions, represented by multilayer perceptrons, that map spatial position and viewing direction to volume density and view-dependent emitted radiance. Because scene content is encoded in network weights rather than in explicit sampled representations such as voxel grids or meshes, the goal is to reason about the expected quality of rendered views and to identify failure modes.
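To make the object of study concrete, the following is a minimal, untrained sketch of the 5D mapping in question, written in plain numpy. The class name `TinyNeRF`, the layer widths, the random weights, and the omission of positional encoding are all illustrative assumptions, not the published NeRF architecture; the sketch only preserves the key structural fact that density depends on position alone while radiance also depends on viewing direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyNeRF:
    """Illustrative MLP: (x, y, z) and (theta, phi) -> (sigma, RGB).

    Scene content lives entirely in the weight matrices below, which is
    what makes interpretability hard compared to a voxel grid or mesh.
    """

    def __init__(self, hidden=64):
        # Trunk processes position only, so density is view-independent.
        self.W1 = rng.normal(0.0, 0.1, (3, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, hidden))
        self.b2 = np.zeros(hidden)
        # Density head (position only) and radiance head (position + direction).
        self.W_sigma = rng.normal(0.0, 0.1, (hidden, 1))
        self.W_rgb = rng.normal(0.0, 0.1, (hidden + 2, 3))

    def __call__(self, xyz, direction):
        h = relu(xyz @ self.W1 + self.b1)
        h = relu(h @ self.W2 + self.b2)
        sigma = relu(h @ self.W_sigma)                    # density >= 0
        feat = np.concatenate([h, direction], axis=-1)    # append viewing direction
        rgb = sigmoid(feat @ self.W_rgb)                  # radiance in (0, 1)
        return sigma, rgb

net = TinyNeRF()
xyz = rng.normal(size=(4, 3))        # 4 sampled 3D positions
direction = rng.normal(size=(4, 2))  # 4 viewing directions (theta, phi)
sigma, rgb = net(xyz, direction)
print(sigma.shape, rgb.shape)
```

Even in this toy form, nothing about the rendered scene can be read directly off `W1`, `W2`, `W_sigma`, or `W_rgb`, which is precisely the interpretability gap the problem statement targets.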
References
Mildenhall et al., "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis," ECCV 2020: "Another direction for future work is interpretability: sampled representations such as voxel grids and meshes admit reasoning about the expected quality of rendered views and failure modes, but it is unclear how to analyze these issues when we encode scenes in the weights of a deep neural network."