
Learning to Infer and Execute 3D Shape Programs (1901.02875v3)

Published 9 Jan 2019 in cs.CV, cs.AI, cs.GR, and cs.LG

Abstract: Human perception of 3D shapes goes beyond reconstructing them as a set of points or a composition of geometric primitives: we also effortlessly understand higher-level shape structure such as the repetition and reflective symmetry of object parts. In contrast, recent advances in 3D shape sensing focus more on low-level geometry but less on these higher-level relationships. In this paper, we propose 3D shape programs, integrating bottom-up recognition systems with top-down, symbolic program structure to capture both low-level geometry and high-level structural priors for 3D shapes. Because there are no annotations of shape programs for real shapes, we develop neural modules that not only learn to infer 3D shape programs from raw, unannotated shapes, but also to execute these programs for shape reconstruction. After initial bootstrapping, our end-to-end differentiable model learns 3D shape programs by reconstructing shapes in a self-supervised manner. Experiments demonstrate that our model accurately infers and executes 3D shape programs for highly complex shapes from various categories. It can also be integrated with an image-to-shape module to infer 3D shape programs directly from an RGB image, leading to 3D shape reconstructions that are both more accurate and more physically plausible.
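The abstract's central idea is that a 3D shape is represented not as raw geometry but as a short program whose control structure (loops, symmetry) captures part repetition. The following is a hypothetical mini-DSL sketch of that idea, assuming a chair built from cuboid primitives; the names, grammar, and parameters are illustrative only and are not the paper's actual shape-program language.

```python
from dataclasses import dataclass

# Toy stand-in for a shape-program primitive; the paper's real DSL
# and its parameterization differ.
@dataclass
class Cuboid:
    x: float; y: float; z: float   # position
    w: float; h: float; d: float   # size

def chair_program():
    """Execute a toy shape program: one seat plus four repeated legs.

    The nested loop is the point: it makes the repetition and
    reflective symmetry of the legs explicit in the program structure,
    rather than leaving it implicit in a point cloud or voxel grid.
    """
    shapes = [Cuboid(0.0, 0.5, 0.0, 1.0, 0.1, 1.0)]  # seat
    for dx in (0.05, 0.95):          # mirrored leg positions along x
        for dz in (0.05, 0.95):      # mirrored leg positions along z
            shapes.append(Cuboid(dx, 0.0, dz, 0.05, 0.5, 0.05))
    return shapes

parts = chair_program()
print(len(parts))  # 5 parts: one seat + four legs
```

Executing such a program reconstructs the shape's geometry, which is what lets the model train by self-supervised reconstruction: inferred programs are executed and compared against the input shape.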

Authors (7)
  1. Yonglong Tian (32 papers)
  2. Andrew Luo (8 papers)
  3. Xingyuan Sun (11 papers)
  4. Kevin Ellis (31 papers)
  5. William T. Freeman (114 papers)
  6. Joshua B. Tenenbaum (257 papers)
  7. Jiajun Wu (249 papers)
Citations (139)
