
Inpaint3D: 3D Scene Content Generation using 2D Inpainting Diffusion (2312.03869v1)

Published 6 Dec 2023 in cs.CV

Abstract: This paper presents a novel approach to inpainting 3D regions of a scene, given masked multi-view images, by distilling a 2D diffusion model into a learned 3D scene representation (e.g. a NeRF). Unlike 3D generative methods that explicitly condition the diffusion model on camera pose or multi-view information, our diffusion model is conditioned only on a single masked 2D image. Nevertheless, we show that this 2D diffusion model can still serve as a generative prior in a 3D multi-view reconstruction problem where we optimize a NeRF using a combination of score distillation sampling and NeRF reconstruction losses. Predicted depth is used as additional supervision to encourage accurate geometry. We compare our approach to 3D inpainting methods that focus on object removal. Because our method can generate content to fill any 3D masked region, we additionally demonstrate 3D object completion, 3D object replacement, and 3D scene completion.
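The abstract describes optimizing a NeRF with a combination of score distillation sampling (SDS) inside the masked region, reconstruction losses on observed pixels, and depth supervision. The sketch below is a minimal, illustrative PyTorch rendering of such a combined objective, not the paper's implementation; all names (`sds_loss`, `combined_loss`, the weights `w_sds`, `w_rec`, `w_depth`) and the specific loss forms are assumptions.

```python
import torch

def sds_loss(x, noise_pred, noise, w_t):
    # Standard SDS surrogate (hypothetical form): the gradient
    # w(t) * (eps_hat - eps) is injected into x via a detached product,
    # so backprop through x yields that gradient without differentiating
    # through the diffusion model.
    grad = (w_t * (noise_pred - noise)).detach()
    return (grad * x).sum() / x.shape[0]

def combined_loss(rgb_pred, rgb_gt, depth_pred, depth_prior, mask,
                  sds_term, w_sds=1.0, w_rec=1.0, w_depth=0.1):
    """Schematic total loss: SDS guides the masked region, reconstruction
    constrains observed pixels, and predicted depth supervises geometry.
    Weights and loss forms are illustrative, not from the paper."""
    # Reconstruction loss on observed (unmasked) pixels only.
    rec = ((1.0 - mask) * (rgb_pred - rgb_gt) ** 2).mean()
    # Depth supervision toward a monocular depth prediction.
    depth = ((depth_pred - depth_prior) ** 2).mean()
    return w_sds * sds_term + w_rec * rec + w_depth * depth
```

In a training loop, `sds_term` would come from rendering a view, noising it, and querying the single-image-conditioned inpainting diffusion model, while `rec` and `depth` are computed from the same rendered batch.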

Authors (7)
  1. Kira Prabhu (2 papers)
  2. Jane Wu (10 papers)
  3. Lynn Tsai (5 papers)
  4. Peter Hedman (21 papers)
  5. Dan B Goldman (15 papers)
  6. Ben Poole (46 papers)
  7. Michael Broxton (5 papers)
Citations (2)
