
MeshCraft: Exploring Efficient and Controllable Mesh Generation with Flow-based DiTs

Published 29 Mar 2025 in cs.CV (arXiv:2503.23022v1)

Abstract: In the domain of 3D content creation, achieving optimal mesh topology through AI models has long been a pursuit for 3D artists. Previous methods, such as MeshGPT, have explored the generation of ready-to-use 3D objects via mesh auto-regressive techniques. While these methods produce visually impressive results, their reliance on token-by-token predictions in the auto-regressive process leads to several significant limitations. These include extremely slow generation speeds and an uncontrollable number of mesh faces. In this paper, we introduce MeshCraft, a novel framework for efficient and controllable mesh generation, which leverages continuous spatial diffusion to generate discrete triangle faces. Specifically, MeshCraft consists of two core components: 1) a transformer-based VAE that encodes raw meshes into continuous face-level tokens and decodes them back to the original meshes, and 2) a flow-based diffusion transformer conditioned on the number of faces, enabling the generation of high-quality 3D meshes with a predefined number of faces. By utilizing the diffusion model for the simultaneous generation of the entire mesh topology, MeshCraft achieves high-fidelity mesh generation at significantly faster speeds compared to auto-regressive methods. Specifically, MeshCraft can generate an 800-face mesh in just 3.2 seconds (35$\times$ faster than existing baselines). Extensive experiments demonstrate that MeshCraft outperforms state-of-the-art techniques in both qualitative and quantitative evaluations on ShapeNet dataset and demonstrates superior performance on Objaverse dataset. Moreover, it integrates seamlessly with existing conditional guidance strategies, showcasing its potential to relieve artists from the time-consuming manual work involved in mesh creation.

Summary

  • The paper introduces MeshCraft, a framework combining a transformer-based VAE with a flow-based diffusion transformer to generate 3D meshes efficiently with controllable face counts.
  • MeshCraft achieves significantly faster mesh generation speeds, generating an 800-face mesh 35 times quicker than previous methods like MeshGPT, while maintaining high reconstruction quality (99.42% triangle accuracy on ShapeNet).
  • This approach offers practical benefits for 3D artists by reducing manual workload through efficient, high-quality mesh generation with user-defined attributes, supporting rapid content creation in professional workflows.

Analysis of MeshCraft: Efficient and Controllable Mesh Generation Framework

The paper presents MeshCraft, a novel approach to address the inefficiencies and lack of control in existing methods for generating 3D meshes. This framework aims to optimize the mesh generation process using flow-based diffusion transformers, which mark an advancement over traditional auto-regressive techniques that suffer from slow generation speeds and uncontrolled mesh face counts. The study is situated within the context of increasing demand for high-quality 3D content creation, especially relevant in fields such as gaming and 3D printing.

Core Contributions and Methodology

MeshCraft's innovation lies in its architecture, which consists of two primary components: a transformer-based Variational Auto-Encoder (VAE) for encoding and decoding meshes, and a flow-based diffusion transformer for generating meshes with a specified number of faces.

  1. Transformer-based VAE: This component encodes raw meshes into continuous face-level tokens and decodes them back. Unlike existing vector-quantization methods, MeshCraft uses continuous tokens, which not only improve reconstruction quality but also significantly shorten the token sequence, accelerating mesh generation.
  2. Flow-based Diffusion Transformer: This component is conditioned on the desired number of faces, allowing controlled generation. Because the diffusion model denoises all face tokens simultaneously rather than predicting them one at a time, MeshCraft generates entire mesh topologies far faster: an 800-face mesh takes just 3.2 seconds, 35 times faster than auto-regressive baselines such as MeshGPT.
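The flow-based sampling described above can be sketched as Euler integration of a learned velocity field from Gaussian noise to face tokens. This is a minimal illustration, not the paper's implementation: `sample_mesh_tokens`, its `token_dim`, and the toy velocity field are assumptions standing in for the trained diffusion transformer; the key point is that all `num_faces` tokens are updated simultaneously at every step.

```python
import numpy as np

def sample_mesh_tokens(velocity_fn, num_faces, token_dim=8, steps=50, seed=0):
    """Euler integration of a rectified-flow ODE from noise to face tokens.

    velocity_fn(x, t) predicts the velocity field; in MeshCraft this role is
    played by the flow-based diffusion transformer conditioned on num_faces.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((num_faces, token_dim))  # one token per face
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t)  # every face token updated in parallel
    return x

def toy_velocity(x, t):
    """Hypothetical stand-in model: drives all tokens toward the origin."""
    return -x

tokens = sample_mesh_tokens(toy_velocity, num_faces=800)
print(tokens.shape)  # (800, 8)
```

In the full system, the resulting continuous face tokens would then be passed to the VAE decoder to recover an explicit triangle mesh.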

Experimental Validation and Results

The performance of MeshCraft is validated through extensive experiments conducted on the ShapeNet and Objaverse datasets. It demonstrates superior capabilities both qualitatively and quantitatively compared to state-of-the-art techniques.

  • Reconstruction Quality: On the ShapeNet dataset, MeshCraft's VAE achieves a triangle accuracy of 99.42% with an L2 reconstruction distance of only 0.06, improving on prior methods based on vector quantization.
  • Generation Performance: MeshCraft outperforms current methods in several metrics, including Coverage (COV), Minimum Matching Distance (MMD), 1-Nearest-Neighbor Accuracy (1-NNA), Jensen-Shannon Divergence (JSD), Fréchet Inception Distance (FID), and Kernel Inception Distance (KID), underlining its effectiveness in producing high-quality mesh outputs.
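Two of the metrics listed above, MMD and COV, can be computed from pairwise Chamfer distances between generated and reference point sets. The sketch below uses standard definitions (not code from the paper); the tiny toy sets at the bottom are purely illustrative.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def mmd_cov(gen, ref):
    """Minimum Matching Distance and Coverage of a generated set vs. a reference set."""
    D = np.array([[chamfer(g, r) for r in ref] for g in gen])
    mmd = D.min(axis=0).mean()                    # avg distance from each ref to its nearest gen
    cov = len(set(D.argmin(axis=1))) / len(ref)   # fraction of refs matched by some gen sample
    return mmd, cov

# Toy example: generated set exactly matches the reference set.
gen = [np.zeros((2, 3)), np.ones((2, 3))]
ref = [np.zeros((2, 3)), np.ones((2, 3))]
mmd, cov = mmd_cov(gen, ref)
print(mmd, cov)  # 0.0 1.0
```

Lower MMD indicates generated samples lie close to the reference distribution, while higher COV indicates the generated set spans more of it; the paper reports MeshCraft improving on both.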

Implications and Future Directions

The ability of MeshCraft to efficiently generate high-fidelity meshes with user-defined attributes (e.g., number of faces) introduces new possibilities for automation in 3D modeling. Practically, it reduces the manual workload for 3D artists while maintaining control over the mesh attributes, aligning with industry needs for customizable and rapid content creation tools.

Theoretically, MeshCraft's use of continuous latent spaces and diffusion models provides a robust methodological alternative to predominantly discrete auto-regressive approaches. This work suggests potential extensions, such as integrating MeshCraft with broader AI-driven design tools and expanding its conditional generation capabilities. Furthermore, future research could explore enhancing the extrapolation capabilities and robustness of the model, particularly in synthesizing objects with previously unseen attributes.

Overall, this paper contributes significantly to the field of automated 3D mesh generation, presenting a framework that effectively balances speed, quality, and control, which are critical factors for its adoption in professional 3D modeling workflows.
