
RenderDiffusion: Image Diffusion for 3D Reconstruction, Inpainting and Generation (2211.09869v4)

Published 17 Nov 2022 in cs.CV and cs.LG

Abstract: Diffusion models currently achieve state-of-the-art performance for both conditional and unconditional image generation. However, so far, image diffusion models do not support tasks required for 3D understanding, such as view-consistent 3D generation or single-view object reconstruction. In this paper, we present RenderDiffusion, the first diffusion model for 3D generation and inference, trained using only monocular 2D supervision. Central to our method is a novel image denoising architecture that generates and renders an intermediate three-dimensional representation of a scene in each denoising step. This enforces a strong inductive structure within the diffusion process, providing a 3D consistent representation while only requiring 2D supervision. The resulting 3D representation can be rendered from any view. We evaluate RenderDiffusion on FFHQ, AFHQ, ShapeNet and CLEVR datasets, showing competitive performance for generation of 3D scenes and inference of 3D scenes from 2D images. Additionally, our diffusion-based approach allows us to use 2D inpainting to edit 3D scenes.
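The key architectural idea is that each denoising step does not predict pixels directly: the network first lifts the noisy image into an explicit 3D scene representation (the paper uses triplane features decoded by volumetric rendering) and then renders that representation back to 2D, so the denoised estimate is 3D-consistent by construction. Below is a minimal PyTorch sketch of what one such denoising step could look like, assuming a triplane representation and a simplified orthographic raymarcher; the class and function names are illustrative, not the authors' code, and conditioning on the diffusion timestep and camera pose is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriplaneDenoiser(nn.Module):
    """Sketch of a denoiser that routes each diffusion step through 3D:
    noisy image -> triplane features -> rendered clean-image estimate.
    (Hypothetical; simplified from the RenderDiffusion idea.)"""

    def __init__(self, img_ch=3, feat_ch=16):
        super().__init__()
        # Encode the noisy 2D image into three axis-aligned feature planes.
        self.encoder = nn.Sequential(
            nn.Conv2d(img_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * feat_ch, 3, padding=1),
        )
        # Tiny MLP decoding a sampled triplane feature to (rgb, density).
        self.decoder = nn.Sequential(
            nn.Linear(feat_ch, 64), nn.ReLU(), nn.Linear(64, 4),
        )

    @staticmethod
    def sample(plane, u, v):
        # Bilinearly sample a (B, C, H, W) plane at normalized coords in [-1, 1].
        B = plane.shape[0]
        grid = torch.stack([u, v], dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
        return F.grid_sample(plane, grid, align_corners=True)

    def render(self, planes, H, W, n_depth=16):
        xy, xz, yz = planes
        device = xy.device
        B = xy.shape[0]
        # One orthographic ray per output pixel, marching along z.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, H, device=device),
            torch.linspace(-1, 1, W, device=device),
            indexing="ij",
        )
        rgb_acc = torch.zeros(B, 3, H, W, device=device)
        transmittance = torch.ones(B, 1, H, W, device=device)
        for z in torch.linspace(-1, 1, n_depth, device=device):
            zmap = torch.full_like(xs, z.item())
            # A 3D point's feature is the sum of its three plane projections.
            f = (self.sample(xy, xs, ys)
                 + self.sample(xz, xs, zmap)
                 + self.sample(yz, ys, zmap))          # (B, C, H, W)
            out = self.decoder(f.permute(0, 2, 3, 1))  # (B, H, W, 4)
            rgb = torch.sigmoid(out[..., :3]).permute(0, 3, 1, 2)
            sigma = F.softplus(out[..., 3:]).permute(0, 3, 1, 2)
            # Standard alpha compositing along the ray.
            alpha = 1 - torch.exp(-sigma * (2.0 / n_depth))
            rgb_acc = rgb_acc + transmittance * alpha * rgb
            transmittance = transmittance * (1 - alpha)
        return rgb_acc

    def forward(self, x_t, t):
        # t (the diffusion timestep) would condition the encoder in a full model.
        B, _, H, W = x_t.shape
        planes = self.encoder(x_t).chunk(3, dim=1)
        return self.render(planes, H, W)  # x0 estimate, rendered from 3D

model = TriplaneDenoiser()
x_t = torch.randn(2, 3, 32, 32)              # noisy images at some step t
x0_hat = model(x_t, t=torch.zeros(2))
print(x0_hat.shape)                          # torch.Size([2, 3, 32, 32])
```

Because the supervision signal is just the rendered image against the 2D training image, this design trains with only monocular 2D data, yet the intermediate triplane can be rendered from any viewpoint, which is what enables the paper's 3D generation, single-view reconstruction, and inpainting-based 3D editing.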

Authors (7)
  1. Titas Anciukevičius (3 papers)
  2. Zexiang Xu (56 papers)
  3. Matthew Fisher (50 papers)
  4. Paul Henderson (37 papers)
  5. Hakan Bilen (62 papers)
  6. Niloy J. Mitra (83 papers)
  7. Paul Guerrero (46 papers)
Citations (132)
