Explicit Mesh Reconstruction of Transparent Surfaces

Develop an explicit mesh reconstruction method for transparent surfaces in object-level 3D reconstruction pipelines. The method should address the case where density-based neural pre-training can model transparency but the subsequent mesh extraction step fails, so that accurate geometry can be recovered for transparent objects.

Background

ReLi3D is a feed-forward multi-view reconstruction system that outputs meshes with spatially varying PBR materials and coherent HDR environment maps. While its density-based NeRF pre-training can represent transparency, the final pipeline produces explicit meshes via FlexiCubes and texture baking, a process that is not suited to reconstructing transparent surfaces.

Transparent materials pose unique challenges due to refraction and complex light transport, which are difficult to capture with surface-only mesh models in a general-purpose, fast feed-forward pipeline. The authors explicitly note that converting transparent objects into explicit meshes lies outside the scope of their work and remains an open research challenge.
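The failure mode can be illustrated with a toy sketch. Iso-surface extractors such as FlexiCubes (or classic marching cubes) recover a surface where the density field crosses a single level; a transparent object whose learned density stays low everywhere never crosses that level, so it disappears from the mesh. The snippet below is a minimal numpy-only illustration with made-up density values, not ReLi3D's actual fields or extraction code; it shows only the thresholding step shared by such extractors.

```python
import numpy as np

# Hypothetical toy density field (illustrative values, not ReLi3D's
# actual output): an opaque sphere with high density next to a
# transparent sphere whose density stays low everywhere.
def add_sphere(vol, center, radius, sigma):
    grid = np.indices(vol.shape)
    dist = np.sqrt(sum((g - c) ** 2 for g, c in zip(grid, center)))
    vol[dist < radius] = sigma

vol = np.zeros((32, 32, 32))
add_sphere(vol, center=(9, 16, 16), radius=5, sigma=50.0)   # opaque object
add_sphere(vol, center=(23, 16, 16), radius=5, sigma=2.0)   # transparent object

# Iso-level thresholding, the step shared by marching-cubes-style
# extractors: keep only voxels whose density reaches the surface level.
level = 10.0
occupied = vol >= level

# The opaque sphere survives; the transparent sphere's density never
# reaches the level, so it vanishes from the extracted geometry.
opaque_voxels = occupied[:16].sum()        # half containing the opaque sphere
transparent_voxels = occupied[16:].sum()   # half containing the transparent sphere
```

Lowering the level does not help in general: a threshold low enough to capture semi-transparent densities also pulls in noise and fog-like residue from the density field, which is one reason density-based representations do not translate directly into clean explicit meshes for transparent surfaces.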

References

"Transparent objects present another limitation: while our density-based NeRF pre-training handles transparency, explicit mesh reconstruction of transparent surfaces remains an open research challenge outside our current scope."

ReLi3D: Relightable Multi-view 3D Reconstruction with Disentangled Illumination (2603.19753 - Dihlmann et al., 20 Mar 2026), Appendix, Section "Limitations and Failure Cases"