Accelerating Neural Field Training via Soft Mining (2312.00075v1)

Published 29 Nov 2023 in cs.CV

Abstract: We present an approach to accelerate Neural Field training by efficiently selecting sampling locations. While Neural Fields have recently become popular, they are often trained by uniformly sampling the training domain or through handcrafted heuristics. We show that improved convergence and final training quality can be achieved by a soft mining technique based on importance sampling: rather than either considering or ignoring a pixel completely, we weight the corresponding loss by a scalar. To implement our idea, we use Langevin Monte Carlo sampling. We show that by doing so, regions with higher error are selected more frequently, leading to more than a 2x improvement in convergence speed. The code and related resources for this study are publicly available at https://ubc-vision.github.io/nf-soft-mining/.
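
To make the two ingredients concrete, below is a minimal PyTorch sketch of one training step for a 2D image-fitting neural field: each per-pixel loss term is weighted by a scalar importance weight (soft mining), and the sample coordinates are nudged toward high-error regions by a Langevin Monte Carlo step. This is an illustrative reconstruction under assumptions, not the authors' released implementation (see the project page above for that); the names `soft_mining_step`, `sample_image`, and the parameters `alpha`, `step_size`, and `noise_scale` are all hypothetical.

```python
import torch
import torch.nn.functional as F

def sample_image(image, coords):
    """Bilinear lookup of ground-truth colors at continuous coordinates.
    image: (C, H, W); coords: (N, 2) in [0, 1]^2, (x, y) order -> (N, C)."""
    grid = coords.view(1, -1, 1, 2) * 2.0 - 1.0  # grid_sample expects [-1, 1]
    out = F.grid_sample(image.unsqueeze(0), grid, align_corners=True)
    return out.view(image.shape[0], -1).t()

def soft_mining_step(model, image, coords,
                     step_size=1e-3, noise_scale=1e-2, alpha=0.6):
    """One training step with soft-mined sample locations (illustrative).

    `alpha` tempers the error-proportional target density; it is an assumed
    knob, not a value taken from the paper. Returns the importance-weighted
    loss (call .backward() on it) and the updated coordinates.
    """
    coords = coords.detach().requires_grad_(True)

    # Per-sample reconstruction error of the neural field at the coords.
    pred = model(coords)                          # (N, C)
    target = sample_image(image, coords)          # (N, C)
    err = (pred - target).pow(2).mean(dim=-1)     # (N,)

    # Soft mining: rather than keeping or dropping pixels outright, weight
    # each loss term by a scalar. Dividing by the (approximate) sampling
    # density q ~ err^alpha keeps the loss an unbiased estimate of the
    # uniform-sampling objective.
    q = err.detach().clamp_min(1e-8) ** alpha
    w = 1.0 / q
    w = w / w.mean()
    loss = (w * err).mean()

    # Langevin Monte Carlo: drift samples up the gradient of log-error and
    # add Gaussian noise, so high-error regions are visited more often.
    (g,) = torch.autograd.grad(err.clamp_min(1e-8).log().sum(),
                               coords, retain_graph=True)
    with torch.no_grad():
        coords = coords + step_size * g + noise_scale * torch.randn_like(coords)
        coords = coords.clamp(0.0, 1.0)           # keep samples in-domain

    return loss, coords.detach()
```

A training loop would alternate `loss, coords = soft_mining_step(...)`, `loss.backward()`, and an optimizer step, feeding the returned coordinates back in so the sample set evolves with the error landscape.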

Authors (7)
  1. Shakiba Kheradmand (4 papers)
  2. Daniel Rebain (20 papers)
  3. Gopal Sharma (16 papers)
  4. Hossam Isack (9 papers)
  5. Abhishek Kar (21 papers)
  6. Andrea Tagliasacchi (78 papers)
  7. Kwang Moo Yi (68 papers)
Citations (1)
