SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation (2212.04493v2)

Published 8 Dec 2022 in cs.CV and cs.LG

Abstract: In this work, we present a novel framework built to simplify 3D asset generation for amateur users. To enable interactive generation, our method supports a variety of input modalities that can be easily provided by a human, including images, text, partially observed shapes and combinations of these, further allowing to adjust the strength of each input. At the core of our approach is an encoder-decoder, compressing 3D shapes into a compact latent representation, upon which a diffusion model is learned. To enable a variety of multi-modal inputs, we employ task-specific encoders with dropout followed by a cross-attention mechanism. Due to its flexibility, our model naturally supports a variety of tasks, outperforming prior works on shape completion, image-based 3D reconstruction, and text-to-3D. Most interestingly, our model can combine all these tasks into one swiss-army-knife tool, enabling the user to perform shape generation using incomplete shapes, images, and textual descriptions at the same time, providing the relative weights for each input and facilitating interactivity. Despite our approach being shape-only, we further show an efficient method to texture the generated shape using large-scale text-to-image models.

Citations (201)

Summary

  • The paper introduces SDFusion, a diffusion-based model that leverages signed distance functions for accurate 3D shape generation.
  • It outperforms previous models on shape completion and reconstruction, achieving improved metrics such as lower UHD and Chamfer Distance.
  • It demonstrates strong text-guided 3D generation with a 49% preference rate over AutoSDF, highlighting its practical integration of multimodal inputs.

An Analysis of SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation

The presented paper introduces SDFusion, a novel framework aimed at democratizing the process of 3D asset generation, particularly targeting users with limited expertise in 3D design. Through the integration of multimodal inputs—such as images, text, and partially observed shapes—SDFusion facilitates an interactive generation process. This paper details the architecture of the model, discusses its capabilities, presents empirical results, and explores prospects for future developments in the domain of AI-driven 3D asset generation.

At the core, SDFusion employs a diffusion-based generative model, leveraging signed distance functions (SDFs) as a compact yet expressive representation for 3D shapes. The model architecture consists of an encoder-decoder setup that learns a latent representation of 3D shapes upon which the diffusion process operates. A notable innovation in this work is the capability to seamlessly integrate multiple input modalities, supported by task-specific encoders and a cross-attention mechanism, allowing the system to account for varying strengths of input conditions.
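
The following is a minimal sketch, under assumed tensor shapes and module names, of how such dropout-based multimodal conditioning and cross-attention could be wired together; it illustrates the mechanism described above, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiModalCondition(nn.Module):
    """Project per-modality features to a shared width and randomly drop modalities."""

    def __init__(self, d_model=256, txt_dim=768, img_dim=512):
        super().__init__()
        self.txt_proj = nn.Linear(txt_dim, d_model)            # e.g. BERT token features
        self.img_proj = nn.Linear(img_dim, d_model)             # e.g. CLIP visual features
        self.null_token = nn.Parameter(torch.zeros(1, 1, d_model))  # unconditional fallback

    def forward(self, batch_size, txt_feat=None, img_feat=None, p_drop=0.3):
        tokens = []
        for feat, proj in ((txt_feat, self.txt_proj), (img_feat, self.img_proj)):
            if feat is None or (self.training and torch.rand(()) < p_drop):
                continue                                        # modality dropout during training
            tokens.append(proj(feat))                           # (B, n_tokens, d_model)
        if not tokens:
            return self.null_token.expand(batch_size, -1, -1)   # fully unconditional path
        return torch.cat(tokens, dim=1)

class CrossAttentionBlock(nn.Module):
    """One cross-attention block of the latent-space denoiser (illustrative only)."""

    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, z_tokens, cond_tokens):
        # z_tokens: flattened latent code of the SDF volume, (B, N, d_model)
        attended, _ = self.attn(z_tokens, cond_tokens, cond_tokens)
        return self.norm(z_tokens + attended)

# Toy usage: condition a latent of 8^3 = 512 positions on text and image features.
cond_net, block = MultiModalCondition(), CrossAttentionBlock()
z = torch.randn(2, 512, 256)
txt = torch.randn(2, 16, 768)                                   # 16 text tokens per prompt
img = torch.randn(2, 1, 512)                                    # one pooled image token
z = block(z, cond_net(batch_size=2, txt_feat=txt, img_feat=img))
```

Randomly dropping each condition during training is what later allows the user to weight or omit individual modalities at inference time.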

The empirical results presented in the paper indicate that SDFusion excels in multiple aspects of 3D shape manipulation. It outperforms prior models on shape completion, as demonstrated on challenging datasets such as ShapeNet and BuildingNet, achieving notable improvements in diversity, measured via Total Mutual Difference (TMD), and in fidelity, evidenced by lower Unidirectional Hausdorff Distance (UHD). The ability of SDFusion to produce high-resolution outputs (up to 128³) while remaining efficient is also significant given the computational demands typically associated with 3D processing.
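
To make the two completion metrics concrete, here is a minimal point-cloud sketch of UHD and TMD. The function names and normalization are illustrative assumptions, not the paper's evaluation code.

```python
import torch

def chamfer(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3)."""
    d = torch.cdist(a, b)                                  # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def uhd(partial, completed):
    """Unidirectional Hausdorff Distance from the partial input to a completion.

    Lower is better: every observed point should lie close to the completed
    surface, so the completion stays faithful to the given partial shape.
    """
    return torch.cdist(partial, completed).min(dim=1).values.max()

def tmd(completions):
    """Total Mutual Difference over k completions of the same partial input.

    Higher is better: for each completion it accumulates the average Chamfer
    distance to the other k - 1 completions, rewarding diverse outputs.
    """
    k = len(completions)
    return sum(
        chamfer(completions[i], completions[j])
        for i in range(k) for j in range(k) if i != j
    ) / (k - 1)
```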

For single-view 3D reconstruction, SDFusion conditions generation on features from a CLIP visual encoder and establishes itself as a strong approach on the Pix3D dataset, outperforming baselines such as Pix2Vox and AutoSDF on metrics including Chamfer Distance and F-Score.
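
For reference, these two reconstruction metrics can be computed on point clouds sampled from the predicted and ground-truth surfaces roughly as follows; the threshold value is a placeholder, not the paper's exact evaluation protocol.

```python
import torch

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between point clouds pred (N, 3) and gt (M, 3)."""
    d = torch.cdist(pred, gt)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def f_score(pred, gt, tau=0.01):
    """F-Score at distance threshold tau (harmonic mean of precision and recall).

    Precision: fraction of predicted points within tau of the ground truth.
    Recall: fraction of ground-truth points within tau of the prediction.
    """
    d = torch.cdist(pred, gt)
    precision = (d.min(dim=1).values < tau).float().mean()
    recall = (d.min(dim=0).values < tau).float().mean()
    return 2 * precision * recall / (precision + recall + 1e-8)
```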

The paper also tackles text-guided 3D generation, where SDFusion relies on a pre-trained BERT model to encode the textual condition. Assessed with a neural evaluator, it achieves a 49% preference rate over AutoSDF, highlighting its strength in aligning generated shapes with natural-language descriptions.
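
The paper states that BERT is used for text conditioning; a minimal sketch of that path might look as follows. The specific checkpoint and the use of per-token features as cross-attention inputs are assumptions for illustration.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
text_encoder = BertModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def encode_text(prompts):
    """Return per-token BERT features (B, T, 768) for the denoiser's cross-attention."""
    batch = tokenizer(prompts, padding=True, truncation=True, return_tensors="pt")
    return text_encoder(**batch).last_hidden_state

cond = encode_text(["a round wooden dining table with four legs"])
```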

As a further contribution, the paper explores integrating SDFusion with pretrained 2D text-to-image models. Using score distillation sampling together with neural rendering, the authors demonstrate effective texture generation, addressing a crucial aspect of the pipeline: producing visually realistic and diverse 3D objects with detailed textures.
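
Score distillation sampling backpropagates the denoising error of a frozen text-to-image diffusion model into differentiable renders of the shape. A schematic of the gradient computation, with a generic placeholder interface for the 2D model, is sketched below.

```python
import torch

def sds_loss(rendered, text_emb, unet, alphas_cumprod):
    """Score-distillation term for one batch of differentiable renders.

    rendered: (B, 3, H, W) images rendered from the textured 3D shape.
    unet: frozen text-to-image denoiser predicting the added noise (placeholder interface).
    """
    b = rendered.shape[0]
    t = torch.randint(20, 980, (b,), device=rendered.device)      # random diffusion step
    a_t = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(rendered)
    noisy = a_t.sqrt() * rendered + (1.0 - a_t).sqrt() * noise    # forward-diffuse the render
    with torch.no_grad():                                         # the 2D prior stays frozen
        eps_pred = unet(noisy, t, text_emb)
    w = 1.0 - a_t                                                 # a common timestep weighting
    # Treat (eps_pred - noise) as a constant so the gradient w.r.t. the render
    # is exactly w * (eps_pred - noise), the SDS update rule.
    grad = (w * (eps_pred - noise)).detach()
    return (grad * rendered).sum()
```

Minimizing this loss with respect to the texture (or other rendering parameters) nudges each rendered view toward images the frozen 2D prior considers likely for the given text prompt.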

Despite its achievements, the paper acknowledges existing limitations, notably that SDFusion is restricted to shapes represented as high-quality SDFs. Looking ahead, the authors suggest potential research directions, including support for additional 3D representations, generating entire scenes rather than isolated objects, and further synergies between 2D and 3D machine learning models.

In conclusion, this work offers a comprehensive framework for interactive 3D content creation, showcasing advances in leveraging multimodal inputs to simplify complex 3D operations for novice users. The fusion of technologies encapsulated in SDFusion holds significant promise for both practical applications and theoretical advancements, potentially revolutionizing the accessibility and versatility of 3D asset generation.
