
MF-NeRF: Memory Efficient NeRF with Mixed-Feature Hash Table (2304.12587v4)

Published 25 Apr 2023 in cs.CV

Abstract: Neural radiance field (NeRF) has shown remarkable performance in generating photo-realistic novel views. Among recent NeRF-related research, approaches that use explicit structures such as grids to manage features achieve exceptionally fast training by reducing the complexity of multilayer perceptron (MLP) networks. However, storing features in dense grids demands a substantial amount of memory, resulting in a notable memory bottleneck within the computer system and, consequently, a significant increase in training time unless the hyper-parameters are tuned in advance. To address this issue, we are the first to propose MF-NeRF, a memory-efficient NeRF framework that employs a Mixed-Feature hash table to improve memory efficiency and reduce training time while maintaining reconstruction quality. Specifically, we first design a mixed-feature hash encoding that adaptively mixes part of the multi-level feature grids and maps them to a single hash table. Then, to obtain the correct index of a grid point, we further develop an index transformation method that transforms the indices of an arbitrary-level grid to those of a canonical grid. Extensive experiments benchmarking against the state-of-the-art Instant-NGP, TensoRF, and DVGO indicate that MF-NeRF achieves the fastest training time on the same GPU hardware with similar or even higher reconstruction quality.
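The abstract describes two mechanisms: a mixed-feature hash encoding that maps multiple grid levels into one shared hash table, and an index transformation that maps the indices of an arbitrary-level grid onto those of a canonical grid before lookup. The sketch below is a minimal, illustrative reading of that idea, not the authors' implementation: the table size, per-level resolutions, hash primes, and the `to_canonical` scaling are assumptions, and corner features are fetched by nearest-corner lookup rather than the trilinear interpolation a real NeRF encoder would use.

```python
# Minimal sketch (assumed, not the paper's code) of a multi-level hash encoding
# in which every resolution level shares a single "mixed-feature" hash table.
import torch

T = 2 ** 18                  # shared hash-table size (assumption)
F = 2                        # features per table entry (assumption)
LEVELS = [16, 32, 64, 128]   # per-level grid resolutions (assumption)
CANONICAL_RES = max(LEVELS)  # canonical grid used for index transformation

table = torch.randn(T, F) * 1e-4                    # one table shared by all levels
PRIMES = torch.tensor([1, 2654435761, 805459861])   # Instant-NGP-style spatial-hash primes

def to_canonical(idx: torch.Tensor, level_res: int) -> torch.Tensor:
    """Map integer corner indices of a grid with resolution `level_res`
    onto the canonical grid, so every level addresses the shared table
    through the same index space (illustrative transform)."""
    scale = CANONICAL_RES // level_res
    return idx * scale

def hash_index(idx: torch.Tensor) -> torch.Tensor:
    """Spatial hash of integer 3D indices into the shared table."""
    h = (idx * PRIMES).sum(dim=-1)
    return torch.remainder(h, T)

def encode(x: torch.Tensor) -> torch.Tensor:
    """Encode points x in [0,1]^3 into concatenated per-level features."""
    feats = []
    for res in LEVELS:
        idx = torch.floor(x * res).long().clamp_(0, res - 1)  # nearest grid corner
        canon = to_canonical(idx, res)
        feats.append(table[hash_index(canon)])
    return torch.cat(feats, dim=-1)

features = encode(torch.rand(1024, 3))   # shape: (1024, len(LEVELS) * F)
```

Sharing one table across all levels means the total feature memory no longer grows with the number of levels, which is the memory saving the abstract attributes to the mixed-feature hash table.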

Citations (2)
