
Neural Groundplans: Persistent Neural Scene Representations from a Single Image (2207.11232v2)

Published 22 Jul 2022 in cs.CV, cs.AI, cs.GR, and cs.LG

Abstract: We present a method to map 2D image observations of a scene to a persistent 3D scene representation, enabling novel view synthesis and disentangled representation of the movable and immovable components of the scene. Motivated by the bird's-eye-view (BEV) representation commonly used in vision and robotics, we propose conditional neural groundplans, ground-aligned 2D feature grids, as persistent and memory-efficient scene representations. Our method is trained self-supervised from unlabeled multi-view observations using differentiable rendering, and learns to complete geometry and appearance of occluded regions. In addition, we show that we can leverage multi-view videos at training time to learn to separately reconstruct static and movable components of the scene from a single image at test time. The ability to separately reconstruct movable objects enables a variety of downstream tasks using simple heuristics, such as extraction of object-centric 3D representations, novel view synthesis, instance-level segmentation, 3D bounding box prediction, and scene editing. This highlights the value of neural groundplans as a backbone for efficient 3D scene understanding models.
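
To make the representation described above concrete, below is a minimal, hypothetical sketch of the core idea: a ground-aligned (bird's-eye-view) 2D feature grid queried at 3D ray sample points and decoded into density and colour for differentiable volume rendering. All names, tensor shapes, and the small MLP decoder are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): sampling a ground-aligned 2D feature
# grid ("groundplan") at 3D points along camera rays, then decoding each sample
# into density and colour for volume rendering.
import torch
import torch.nn.functional as F


def query_groundplan(groundplan, points, scene_extent):
    """Bilinearly sample a ground-aligned feature grid at 3D points.

    groundplan:   (B, C, H, W) feature grid aligned with the ground (x-z) plane
    points:       (B, N, 3) 3D sample points along camera rays, in world space
    scene_extent: half-size of the scene (assumed metres), mapping x, z to [-1, 1]
    """
    # Project points onto the ground plane: keep x and z, drop height (y).
    xz = points[..., [0, 2]] / scene_extent                        # (B, N, 2)
    grid = xz.unsqueeze(1)                                         # (B, 1, N, 2)
    feats = F.grid_sample(groundplan, grid, align_corners=False)   # (B, C, 1, N)
    return feats.squeeze(2).permute(0, 2, 1)                       # (B, N, C)


class GroundplanDecoder(torch.nn.Module):
    """Toy MLP mapping a sampled feature plus the point's height to density and colour."""

    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(feat_dim + 1, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 4),  # (density, r, g, b)
        )

    def forward(self, feats, points):
        height = points[..., 1:2]        # condition on y so the 2D plane spans 3D
        out = self.mlp(torch.cat([feats, height], dim=-1))
        density = F.softplus(out[..., :1])
        color = torch.sigmoid(out[..., 1:])
        return density, color
```

Because the scene is stored as a 2D ground-aligned grid rather than a 3D voxel volume, memory grows with ground area rather than scene volume, which is consistent with the abstract's claim that groundplans are persistent and memory-efficient.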

Authors (10)
  1. Prafull Sharma (8 papers)
  2. Ayush Tewari (43 papers)
  3. Yilun Du (113 papers)
  4. Sergey Zakharov (34 papers)
  5. Rares Ambrus (53 papers)
  6. Adrien Gaidon (84 papers)
  7. William T. Freeman (114 papers)
  8. Joshua B. Tenenbaum (257 papers)
  9. Vincent Sitzmann (38 papers)
  10. Fredo Durand (39 papers)
Citations (15)
