LOLNeRF: Learn from One Look (2111.09996v2)

Published 19 Nov 2021 in cs.CV

Abstract: We present a method for learning a generative 3D model based on neural radiance fields, trained solely from data with only single views of each object. While generating realistic images is no longer a difficult task, producing the corresponding 3D structure such that they can be rendered from different views is non-trivial. We show that, unlike existing methods, one does not need multi-view data to achieve this goal. Specifically, we show that by reconstructing many images aligned to an approximate canonical pose with a single network conditioned on a shared latent space, you can learn a space of radiance fields that models shape and appearance for a class of objects. We demonstrate this by training models to reconstruct object categories using datasets that contain only one view of each subject without depth or geometry information. Our experiments show that we achieve state-of-the-art results in novel view synthesis and high-quality results for monocular depth prediction.

Authors (5)
  1. Daniel Rebain (20 papers)
  2. Mark Matthews (11 papers)
  3. Kwang Moo Yi (68 papers)
  4. Dmitry Lagun (18 papers)
  5. Andrea Tagliasacchi (78 papers)
Citations (107)
