ArtFormer: Controllable Generation of Diverse 3D Articulated Objects (2412.07237v3)

Published 10 Dec 2024 in cs.CV, cs.AI, and cs.RO

Abstract: This paper presents a novel framework for modeling and conditional generation of 3D articulated objects. Troubled by flexibility-quality tradeoffs, existing methods are often limited to using predefined structures or retrieving shapes from static datasets. To address these challenges, we parameterize an articulated object as a tree of tokens and employ a transformer to generate both the object's high-level geometry code and its kinematic relations. Subsequently, each sub-part's geometry is further decoded using a signed-distance-function (SDF) shape prior, facilitating the synthesis of high-quality 3D shapes. Our approach enables the generation of diverse objects with high-quality geometry and varying number of parts. Comprehensive experiments on conditional generation from text descriptions demonstrate the effectiveness and flexibility of our method.

Summary

  • The paper introduces ArtFormer, a novel framework for controllably generating diverse 3D articulated objects from text or images by representing objects as tree structures and predicting sequential tokens.
  • ArtFormer utilizes a diverse shape prior trained with diffusion and codebooks for part geometry generation and an Articulation Transformer with a Tree Position Embedding for modeling structure and kinematics.
  • Experiments show ArtFormer generates higher quality geometry, greater diversity, and better text alignment than baselines, demonstrating the ability to create novel shapes despite dataset limitations.

This paper introduces ArtFormer, a novel framework for the controllable generation of diverse 3D articulated objects from text or image descriptions. The key challenge addressed is simultaneously generating high-quality geometry for individual parts and accurate kinematic relationships between them, which existing methods struggle with, often relying on predefined structures or dataset retrieval.

The ArtFormer framework addresses these limitations by representing an articulated object as a tree structure, where each node corresponds to a sub-part. Each node/token contains attributes defining the sub-part's geometry (bounding box $b_i$, geometry latent code $z_i$) and its kinematic relation to its parent (joint axis $j_i$, joint limits $l_i$). This parameterization converts the problem of generating an articulated object into generating a sequence of tokens representing this tree.
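
To make the tree-of-tokens parameterization concrete, here is a minimal data-structure sketch. The field names (`bbox`, `geom_latent`, `joint_axis`, `joint_limits`) and the breadth-first flattening are illustrative assumptions, not the paper's exact interface.

```python
from dataclasses import dataclass, field
from typing import List, Optional

import numpy as np

@dataclass
class PartToken:
    """One node of the articulation tree (field names are illustrative)."""
    bbox: np.ndarray          # b_i: sub-part bounding box
    geom_latent: np.ndarray   # z_i: compact geometry code, decoded by the SDF prior
    joint_axis: np.ndarray    # j_i: axis of the joint connecting to the parent
    joint_limits: np.ndarray  # l_i: motion range of that joint
    parent: Optional[int] = None            # fa_i: parent index (None for the root)
    children: List[int] = field(default_factory=list)

def flatten_tree(tokens: List[PartToken]) -> List[int]:
    """Breadth-first flattening: one plausible order in which a transformer
    could emit the tokens, level by level from the root."""
    order: List[int] = []
    frontier = [i for i, t in enumerate(tokens) if t.parent is None]
    while frontier:
        order.extend(frontier)
        frontier = [c for i in frontier for c in tokens[i].children]
    return order
```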

The core of ArtFormer consists of two main components:

  1. Diverse and Controllable Shape Prior: Instead of directly generating high-dimensional geometry, ArtFormer generates a compact latent code $z_i$ for each sub-part, which is then decoded by a Signed Distance Function (SDF) shape prior. The shape prior is learned via a VAE encoder $q(z|f)$ and decoder $p(f|z)$, combined with a generalizable SDF network $\Omega(f, x)$ trained on point clouds. To enable controllable and diverse geometry generation, a conditional diffusion model $\epsilon(z_t, t, \hat{c}_g, c_s)$ is trained on the latent space $p(z)$, conditioned on geometry features $c_g = \mathcal{E}_g(z)$ and semantic information $c_s = \mathcal{E}_s(\text{name})$. A key aspect for diversity is discretizing the geometry condition $c_g$ using codebooks and sampling via Gumbel-Softmax during training; at inference the model predicts logits $P_i$ for sampling from the codebooks rather than predicting the high-dimensional $z$ directly. This approach ensures geometry quality while promoting diversity (see the sampling sketch after this list).
  2. Articulation Transformer: A transformer architecture generates the sequence of tokens representing the articulated object tree. Each token comprises the parent index $\text{fa}_i$ and the concatenated attributes $[b_i, z_i, j_i, l_i]$, processed by an MLP tokenizer. To model the tree structure effectively, a novel Tree Position Embedding (TPE) is introduced, which processes the path from the root to each node with a GRU and concatenates absolute position encodings. Conditional generation (primarily text-guided) is incorporated through cross-attention layers, where conditioning tokens (e.g., from a pre-trained text encoder such as T5) are fed into the transformer. Generation proceeds by iterative decoding: starting from a special start token $\mathcal{S}$, the model predicts child nodes for existing nodes at each step until terminal tokens $\mathcal{T}$ have been output for all nodes (sketches of the TPE and the decoding loop follow this list). This autoregressive process helps capture inter-dependencies between parts. The transformer is trained with binary cross-entropy for terminal-token prediction ($L_o$), MSE for the attributes ($L_a$), and a KL divergence loss ($L_P$) that aligns the codebook logits with the distributions derived from the shape prior's latent codes.
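
To illustrate item 1's discretized geometry condition, here is a minimal sketch of Gumbel-Softmax sampling over a codebook. The `(batch, num_codes)` logits layout and the temperature `tau` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def sample_codebook_condition(logits: torch.Tensor,
                              codebook: torch.Tensor,
                              tau: float = 1.0,
                              hard: bool = False) -> torch.Tensor:
    """Differentiable sampling of a discrete geometry condition c_g.

    logits:   (batch, num_codes) scores P_i for the codebook entries
    codebook: (num_codes, dim) learned code vectors
    Gumbel-Softmax keeps the discrete choice differentiable during training;
    at inference, hard=True takes a one-hot sample instead.
    """
    weights = F.gumbel_softmax(logits, tau=tau, hard=hard)  # (batch, num_codes)
    return weights @ codebook                               # (batch, dim)
```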
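
Item 2's Tree Position Embedding might look roughly like the following: each node's root-to-node path, encoded as a sequence of per-level position embeddings, is summarized by a GRU. This is one reading of the description; the embedding sizes and the use of the final hidden state are assumptions.

```python
import torch
import torch.nn as nn

class TreePositionEmbedding(nn.Module):
    """Sketch of a TPE in the spirit of the paper: run a GRU over the
    absolute position encodings of the nodes on the root-to-node path
    and use the final hidden state as the embedding."""
    def __init__(self, max_depth: int, dim: int):
        super().__init__()
        self.abs_pos = nn.Embedding(max_depth, dim)   # per-level absolute encoding
        self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, paths: torch.Tensor) -> torch.Tensor:
        # paths: (batch, path_len) integer indices along each root-to-node path
        h = self.abs_pos(paths)        # (batch, path_len, dim)
        _, last = self.gru(h)          # last: (1, batch, dim)
        return last.squeeze(0)         # (batch, dim)
```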
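
And a sketch of the level-wise iterative decoding from item 2. Here `model.predict_children` is a hypothetical interface standing in for a transformer forward pass, and the frontier bookkeeping is an illustrative reconstruction.

```python
from typing import Any, List

START = object()  # stands in for the special start token S

def generate_tree(model: Any, condition: Any, max_steps: int = 16) -> List[Any]:
    """Level-wise iterative decoding: each step predicts child tokens for
    every node on the current frontier, stopping once all frontier nodes
    have emitted the terminal token T."""
    tree: List[Any] = []
    frontier: List[Any] = [START]
    for _ in range(max_steps):
        next_frontier: List[Any] = []
        for node in frontier:
            # Hypothetically returns ([], True) when the node emits T.
            children, terminal = model.predict_children(node, tree, condition)
            if not terminal:
                tree.extend(children)
                next_frontier.extend(children)
        if not next_frontier:  # every frontier node terminated
            break
        frontier = next_frontier
    return tree
```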

For practical implementation, the shape prior is trained first on datasets like PartNet and PartNet-Mobility. Then, the Articulation Transformer is trained on PartNet-Mobility, using text descriptions generated by GPT-4o from object snapshots. The geometry latent codes $z$ from the pre-trained shape prior are used to supervise the attribute prediction, and the derived codebook distributions supervise the logits $P$.
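
The three supervision signals above might be combined as in the following sketch. The loss weights `w_o`, `w_a`, `w_p` and the exact KL formulation are assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn.functional as F

def transformer_loss(stop_logits, stop_targets,
                     attr_pred, attr_target,
                     code_logits, code_target_dist,
                     w_o=1.0, w_a=1.0, w_p=1.0):
    """Sketch of the combined training objective (weights are assumptions)."""
    # L_o: binary cross-entropy on the terminal-token prediction
    l_o = F.binary_cross_entropy_with_logits(stop_logits, stop_targets)
    # L_a: MSE on the concatenated attributes [b, z, j, l]
    l_a = F.mse_loss(attr_pred, attr_target)
    # L_P: KL divergence between predicted codebook logits and the target
    # distribution derived from the shape prior's latent codes
    l_p = F.kl_div(F.log_softmax(code_logits, dim=-1),
                   code_target_dist, reduction="batchmean")
    return w_o * l_o + w_a * l_a + w_p * l_p
```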

Experiments demonstrate ArtFormer's effectiveness compared to baselines such as NAP and CAGE. ArtFormer, which directly generates geometry, significantly outperforms NAP variants on Minimum Matching Distance (MMD), Coverage (COV), and Part Overlapping Ratio (POR), indicating higher-quality geometry and more plausible kinematics. Compared to CAGE (which retrieves parts from a dataset), ArtFormer achieves better COV and 1-NNA, suggesting superior diversity and distribution coverage, while CAGE scores better on MMD and POR. Human studies further support ArtFormer's ability to generate more diverse objects that better align with text instructions. The ability to generate novel shapes is demonstrated by analyzing the Chamfer Distance between generated parts and training-set parts. The framework's flexibility is validated through image-guided generation by replacing the text encoder with an image encoder (BLIP-2). Ablation studies highlight the critical contributions of both the Tree Position Embedding and the Shape Prior to overall performance.
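
For reference, a minimal NumPy sketch of the Chamfer Distance underlying the novelty analysis. The paper's exact variant (squared vs. unsquared distances, normalization) isn't specified here, so this shows one standard formulation.

```python
from typing import List

import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point clouds of shape (N, 3) and (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def novelty(generated: np.ndarray, train_parts: List[np.ndarray]) -> float:
    """Distance from a generated part to its nearest training-set part:
    larger values suggest a more novel shape."""
    return min(chamfer_distance(generated, p) for p in train_parts)
```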

While successful, the paper notes several limitations, including the dependence on limited datasets, which restricts the diversity of object types and number of parts that can be generated. Future work could explore larger-scale datasets, incorporate richer multi-modal inputs beyond text and images (e.g., point clouds or target joint structures), and improve the modeling of complex articulation details specified in conditions.