AUTO3D: Novel view synthesis through unsupervisely learned variational viewpoint and global 3D representation (2007.06620v2)

Published 13 Jul 2020 in cs.CV and cs.LG

Abstract: This paper targets learning-based novel view synthesis from a single or a limited number of 2D images without pose supervision. In viewer-centered coordinates, we construct an end-to-end trainable conditional variational framework that disentangles an unsupervisedly learned relative pose/rotation from an implicit global 3D representation (shape, texture, the origin of the viewer-centered coordinates, etc.). The global appearance of the 3D object is given by several appearance-describing images taken from any number of viewpoints. Our spatial correlation module extracts a global 3D representation from these appearance-describing images in a permutation-invariant manner. The system thus achieves implicit 3D understanding without explicit 3D reconstruction. With the unsupervisedly learned viewer-centered relative pose/rotation code, the decoder can hallucinate novel views continuously by sampling the relative pose from a prior distribution. Across various applications, we demonstrate that our model achieves comparable or even better results than pose/3D-model-supervised learning-based novel view synthesis (NVS) methods, with any number of input views.
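To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch (not the authors' implementation) of the two components the abstract highlights: a permutation-invariant aggregation of features from any number of appearance-describing views, and a decoder conditioned on a relative-pose code sampled from a prior. All layer sizes, module names, and the mean-pooling choice are illustrative assumptions.

```python
# Hedged sketch, not the authors' code: (1) permutation-invariant aggregation
# of features from any number of input views, and (2) a decoder conditioned on
# a relative-pose code sampled from a prior. All shapes are illustrative.
import torch
import torch.nn as nn


class PermutationInvariantEncoder(nn.Module):
    """Encodes N input views and pools them so view order does not matter."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.per_view = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (B, N, 3, H, W) -> global appearance code (B, feat_dim)
        b, n = views.shape[:2]
        feats = self.per_view(views.flatten(0, 1)).view(b, n, -1)
        return feats.mean(dim=1)  # mean pooling is permutation invariant


class PoseConditionedDecoder(nn.Module):
    """Synthesizes a novel view from the global code and a pose code."""
    def __init__(self, feat_dim: int = 256, pose_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + pose_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, global_code, pose_code):
        return self.net(torch.cat([global_code, pose_code], dim=1))


if __name__ == "__main__":
    enc, dec = PermutationInvariantEncoder(), PoseConditionedDecoder()
    views = torch.rand(2, 5, 3, 64, 64)   # any number (here 5) of input views
    z_pose = torch.randn(2, 8)            # relative-pose code drawn from a prior
    novel = dec(enc(views), z_pose)       # (2, 3, 32, 32) synthesized views
    print(novel.shape)
```

Mean pooling over per-view features is one simple way to guarantee permutation invariance; the paper's spatial correlation module is more elaborate, but the interface is the same: any number of views in, one global 3D representation out.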

Authors (6)
  1. Xiaofeng Liu (124 papers)
  2. Tong Che (26 papers)
  3. Yiqun Lu (1 paper)
  4. Chao Yang (334 papers)
  5. Site Li (15 papers)
  6. Jane You (19 papers)
Citations (20)
