
Diffusion-SDF: Implicit 3D Modeling

Updated 7 March 2026
  • Diffusion-SDF is a generative framework that applies diffusion processes to signed distance fields for creating implicit, high-fidelity 3D representations.
  • It employs forward noising and reverse denoising with neural networks and conditioning methods, such as cross-attention, to enable tasks like text-to-shape synthesis and shape completion.
  • Diffusion-SDF models outperform traditional voxel and mesh-based methods, offering superior shape quality, diversity, and controllable generation across various applications.

Diffusion-SDF refers to a class of generative models that leverage diffusion processes to generate or reconstruct signed distance fields (SDFs) for shapes, typically in 3D or 2D geometric modeling tasks. The approach integrates implicit surface representations, powerful probabilistic diffusion mechanisms, and, in advanced systems, conditioning methods such as text or image features, to achieve high-quality and diverse 3D content synthesis. Diffusion-SDF frameworks have been applied to text-to-shape, shape completion, sketch-conditioned modeling, anatomical structure synthesis, and polygon reconstruction, exhibiting advantages over traditional voxel-based, point cloud, or mesh generative methods (Li et al., 2022, Chou et al., 2022, Guo et al., 10 Mar 2025, Zheng et al., 2023, Moorthy et al., 2024).

1. SDF-Based Generative Modeling: Foundations

Signed distance fields (SDFs) define a scalar field ϕ : ℝⁿ → ℝ (typically n = 2 or 3) such that ϕ(x) = 0 represents the object surface, ϕ(x) > 0 holds outside, and ϕ(x) < 0 inside the object. SDF representations are implicit, yield watertight meshes at arbitrary resolution via iso-surfacing algorithms such as Marching Cubes, and are effective for encoding complex topologies and fine details.
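The sign convention above can be illustrated with a minimal NumPy sketch that samples the analytic SDF of a sphere on a voxel grid (the `sphere_sdf` helper is illustrative, not taken from any of the cited systems):

```python
import numpy as np

def sphere_sdf(points, center, radius):
    # Signed distance to a sphere: negative inside, zero on the
    # surface, positive outside (the convention used in the text).
    return np.linalg.norm(points - center, axis=-1) - radius

# Sample the field on a regular 32^3 grid over [-1, 1]^3, giving the
# kind of voxelized SDF volume that Marching Cubes would iso-surface
# at the zero level set to extract a watertight mesh.
n = 32
axis = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
phi = sphere_sdf(grid, center=np.zeros(3), radius=0.5)
```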

Diffusion-SDF approaches utilize SDFs as the fundamental generative object: rather than generating explicit geometry, a diffusion process acts on either the SDF volume, its neural network weights, or lower-dimensional latent representations encoded from SDFs. This formulation supports both unconditional generation and conditioned synthesis (e.g., by text, sketches, partial observations, or combinatorial graphs), in contrast to classic explicit 3D representations such as occupancy grids or point clouds (Li et al., 2022, Chou et al., 2022, Guo et al., 10 Mar 2025, Zheng et al., 2023, Moorthy et al., 2024).

2. Diffusion Processes on SDFs and Latents

Diffusion-SDF models adopt a forward noising (diffusion) process and a learned reverse (denoising) process. The clean SDF (or its latent code) is gradually corrupted by additive Gaussian noise over T steps:

  • Forward: z_t = √(ᾱ_t) z_0 + √(1 − ᾱ_t) ε, where z_0 is the clean latent (or SDF grid) and ε ∼ N(0, I).
  • Reverse: A neural network ε_θ(z_t, t, cond) predicts the noise (or the clean signal), which is then used to compute the mean of the reverse conditional and sample z_{t−1}.
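A minimal NumPy sketch of both processes, using a linear β schedule; a toy array stands in for the SDF latent, and the true noise is passed in place of a learned ε_θ (schedule values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule; alpha_bar is the cumulative product used in
# the closed-form forward step (notation matching the text).
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def forward(z0, t, eps):
    # q(z_t | z_0): z_t = sqrt(abar_t) z0 + sqrt(1 - abar_t) eps
    return np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def reverse_step(zt, t, eps_pred, noise):
    # One DDPM ancestral step from the predicted noise eps_pred.
    mean = (zt - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_pred) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    return mean + np.sqrt(betas[t]) * noise

# A toy 'latent' standing in for an SDF grid; with a perfect noise
# prediction, one reverse step moves z_t back toward z_0.
z0 = rng.normal(size=(4, 4, 4))
eps = rng.normal(size=z0.shape)
t = 500
zt = forward(z0, t, eps)
z_prev = reverse_step(zt, t, eps_pred=eps, noise=np.zeros_like(z0))
```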

Variants differ in where the diffusion acts: directly over the voxelized SDF volume, over lower-dimensional latent codes produced by an SDF autoencoder, or over the weights of a neural SDF network.

Training minimizes a mean-squared error on the noise prediction or signal prediction (L2 loss), as in conventional Denoising Diffusion Probabilistic Models (DDPMs).
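As a sketch of this objective, the snippet below fits a toy linear "denoiser" to predict the noise from a noisy latent by gradient descent on the L2 loss. The linear map stands in for the 3D U-Net of a real Diffusion-SDF model; sizes and the single fixed noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def ddpm_loss(eps_pred, eps):
    # L2 noise-prediction objective used in DDPM training.
    return np.mean((eps_pred - eps) ** 2)

# Toy denoiser: a single linear map W on a flattened noisy latent.
d = 16
W = np.zeros((d, d))
z0 = rng.normal(size=d)
eps = rng.normal(size=d)
zt = 0.7 * z0 + 0.3 * eps          # one fixed noise level for brevity

for _ in range(200):                # plain gradient descent on the MSE
    eps_pred = W @ zt
    grad = 2.0 * np.outer(eps_pred - eps, zt) / d
    W -= 0.05 * grad

final_loss = ddpm_loss(W @ zt, eps)
```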

3. Conditioning Mechanisms and Architectural Innovations

Conditioning is central in Diffusion-SDF, supporting tasks such as text-to-shape, image/sketch-conditioned modeling, shape completion, or controllable anatomy generation.

Conditioning Techniques

  • Cross-attention injection of text, image/sketch, or graph embeddings into the denoiser (Li et al., 2022, Zheng et al., 2023, Moorthy et al., 2024).
  • Classifier-free guidance, trading fidelity to the condition against sample diversity (Li et al., 2022).
  • Loss-guided diffusion and repainting for controllable anatomy and texture synthesis (Guo et al., 10 Mar 2025).

Architectural Modules

  • UinU-Net: A two-level 3D U-Net where an inner network performs per-patch processing (1×1×1 convolutional blocks and transformers) at the bottleneck, merged with an outer global U-Net, yielding patch-level independence and global coherence (Li et al., 2022).
  • View-aware local attention: For sketch conditioning, maps 2D patch features from a ViT encoder to 3D voxel neighborhoods via cross-attention, enforcing spatial consistency (Zheng et al., 2023).
  • Octree U-Nets and refiners: Efficiently process high-resolution or domain-adapted SDFs via sparse convolutions and residual connections (Zheng et al., 2023, Guo et al., 10 Mar 2025).
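These modules build on the cross-attention primitive, in which voxel queries attend over conditioning tokens (e.g. ViT patch features of a sketch). A single-head NumPy sketch without learned projections, simplified relative to the view-aware local variants above:

```python
import numpy as np

def cross_attention(queries, keys, values):
    # Scaled dot-product attention: each query row attends over all
    # key/value tokens. One head, no projections, for brevity.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # stabilise the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

rng = np.random.default_rng(0)
voxel_feats = rng.normal(size=(64, 32))   # 64 voxel queries, feature dim 32
patch_feats = rng.normal(size=(49, 32))   # 7x7 grid of sketch patch tokens
attended = cross_attention(voxel_feats, patch_feats, patch_feats)
```

A view-aware local variant would restrict each voxel's key/value set to the 2D patches its projection falls into, rather than attending over all tokens.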

4. Task-Specific Implementations and Methodologies

Text-to-Shape and Text-Conditioned Generation

Diffusion-SDF (Li et al., 2022) encodes input captions via a frozen CLIP text encoder, injects embeddings into a Voxelized Diffusion UinU-Net, and leverages classifier-free guidance for flexible text-shape tradeoffs. The patch-wise SDF autoencoder splits the SDF grid into independent latents, enabling scalable, localized generation and efficient diffusion on coarse representations.
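The classifier-free guidance step combines an unconditional and a caption-conditioned noise prediction; below is a minimal sketch of the standard formula, with toy arrays standing in for real ε_θ outputs:

```python
import numpy as np

def cfg_noise(eps_uncond, eps_cond, guidance_scale):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the conditional one. Scale 0 ignores the
    # caption, scale 1 is plain conditional sampling, and larger
    # values trade diversity for fidelity to the text condition.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

rng = np.random.default_rng(0)
eps_u = rng.normal(size=(8, 8, 8))
eps_c = rng.normal(size=(8, 8, 8))
guided = cfg_noise(eps_u, eps_c, guidance_scale=3.0)
```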

Shape Completion and Manipulation

Completion is performed by fusing known partial regions (masks) at each denoising step, while manipulation applies cycle sampling: forward-diffuse an initial latent, then denoise with a new condition to blend original and edited features (Li et al., 2022).
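The mask-fusion step can be sketched as follows: at each denoising step, the known region is replaced by a forward-noised copy of the partial observation at the current noise level, keeping the model's sample only where the mask is unknown (a RePaint-style scheme; function and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_known(z_denoised, z0_known, mask, t, alpha_bar):
    # mask == 1 marks the known partial region. Noise the observation
    # to the current step t, then stitch it over the model's sample so
    # the known geometry persists through the reverse process.
    eps = rng.normal(size=z0_known.shape)
    z_known_t = (np.sqrt(alpha_bar[t]) * z0_known
                 + np.sqrt(1.0 - alpha_bar[t]) * eps)
    return mask * z_known_t + (1.0 - mask) * z_denoised

z_model = rng.normal(size=(4, 4, 4))    # current model sample
z_obs = rng.normal(size=(4, 4, 4))      # clean partial observation
mask = np.zeros((4, 4, 4))
mask[:2] = 1.0                          # half the volume is known
alpha_bar = np.linspace(0.99, 0.01, 10)
fused = fuse_known(z_model, z_obs, mask, t=5, alpha_bar=alpha_bar)
```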

Sketch/Image-Conditioned 3D Generation

Locally Attentional SDF Diffusion (LAS-Diffusion) (Zheng et al., 2023) uses a two-stage pipeline with sketch-view-aware attention to enhance spatially-local control, generating high-fidelity shapes from 2D sketches with occupancy and SDF fields reconstructed in sequence.

Anatomical and Controllable Structure Synthesis

CAFusion (Guo et al., 10 Mar 2025) parameterizes morphology (size, shape code, position) via SDF analytic transformations and employs loss-guided diffusion both for anatomy and texture, providing control over both macro- and micro-structure (e.g., curvature index). The two-stage setup separately diffuses shape and textural CT intensity, using specialized guidance and repainting to incorporate background context.
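One way such analytic SDF transformations can be realized: for uniform scaling, evaluating a base SDF at (x − offset)/scale and multiplying the result by scale yields a true signed distance field for the resized, repositioned shape. A sketch under that assumption (helper names are illustrative, not CAFusion's API):

```python
import numpy as np

def sphere_sdf(x, radius=1.0):
    # Base shape: unit-parameterized sphere SDF at the origin.
    return np.linalg.norm(x, axis=-1) - radius

def transform_sdf(sdf, scale, offset):
    # Analytic size/position control: warp the query point into the
    # base shape's frame, then rescale so distances stay metric
    # (valid for uniform scaling).
    def phi(x):
        return scale * sdf((x - offset) / scale)
    return phi

# A sphere of radius 0.5 centred at (1, 0, 0).
phi = transform_sdf(sphere_sdf, scale=0.5, offset=np.array([1.0, 0.0, 0.0]))
```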

Polygonal Generation from Graphs

VisDiff (Moorthy et al., 2024) demonstrates SDF-guided combinatorial structure generation, inferring the SDF of a 2D polygon (conditioned on a visibility graph via a code encoder and cross-attention), then extracting ordered polygon vertices from the learned SDF. This approach outperforms direct vertex-generation and baseline methods on visibility reconstruction.
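Extracting geometry from a learned 2D SDF reduces to locating its zero level set; marching-squares-style extraction does this edge-wise via linear interpolation between samples of opposite sign. A 1D sketch of that edge rule (illustrative of level-set extraction generally, not VisDiff's vertex-ordering procedure):

```python
import numpy as np

def zero_crossings_1d(xs, phi):
    # Locate the zero level set of a sampled SDF along a scan line by
    # linear interpolation on each edge with a sign change.
    crossings = []
    for i in range(len(xs) - 1):
        a, b = phi[i], phi[i + 1]
        if a == 0.0:
            crossings.append(xs[i])
        elif a * b < 0.0:                   # sign change on this edge
            t = a / (a - b)                 # interpolation weight in [0, 1]
            crossings.append(xs[i] + t * (xs[i + 1] - xs[i]))
    return crossings

xs = np.linspace(-1.0, 1.0, 201)
phi = np.abs(xs) - 0.5                      # SDF of the interval [-0.5, 0.5]
crossings = zero_crossings_1d(xs, phi)
```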

5. Quantitative Evaluation and Empirical Outcomes

Diffusion-SDF variants have shown superior performance in shape quality, diversity, and conditional fidelity.

  • Text-to-shape (Diffusion-SDF (Li et al., 2022)): Attained highest IoU, semantic accuracy, and CLIP-S (text-shape alignment) over baselines, with lower TMD (greater sample diversity).
  • Shape completion (Chou et al., 2022, Li et al., 2022): Outperformed ShapeGAN, PVD, and ShapeFormer in minimum matching distance (MMD) and diversity metrics.
  • Sketch-conditioned modeling (Zheng et al., 2023): Achieved higher CLIPScore and lower Chamfer/EMD errors on IKEA and ProSketch-3D Chairs compared to Sketch2Model/Sketch2Mesh; FID scores lower (better) than GAN and previous DFMs.
  • Anatomical synthesis (Guo et al., 10 Mar 2025): Synthetic lymph node data improved downstream segmentation (+6.45 pp Dice); in visual Turing tests, radiologists distinguished real from synthetic samples only ~60% of the time.
  • Polygon structure (Moorthy et al., 2024): Visibility reconstruction F1=0.80 (vs. 0.66 baseline), strong OOD generalization, and effective sampling for combinatorial characterization.

Paper                     | Task / Domain                    | Key Metrics & Results
(Li et al., 2022)         | Text-to-shape 3D generation      | Best IoU/Acc/CLIP-S; high diversity (low TMD)
(Chou et al., 2022)       | Shape completion, reconstruction | ~10% better MMD/COV than ShapeGAN/PVD
(Zheng et al., 2023)      | Sketch-conditioned 3D            | CLIPScore 96.9; FID 17.3–44.9, outperforming SOTA
(Guo et al., 10 Mar 2025) | Medical anatomy synthesis        | Dice +6.45 pp; high realism; >50% COV
(Moorthy et al., 2024)    | Polygon from graph               | F1 = 0.80; OOD F1 up to 0.935; SOTA on graphs

6. Advantages, Limitations, and Future Outlook

Advantages

  • Watertight SDFs enable faithful, surface-consistent, and resolution-agnostic output (Li et al., 2022, Chou et al., 2022).
  • Probabilistic diffusion sampling provides multimodal, diverse, and robust generation.
  • Patch- and latent-based splits (e.g., VAE + UinU-Net, refiner modules) reduce memory and scale to higher resolutions.
  • Conditioning (cross-attention, loss-based) supports complex tasks: text/image guidance, shape editing, controllable anatomy, and combinatorial constraints.
  • Modular design allows extension to completion, manipulation, and medical domains.

Limitations

  • Training and inference are computationally costly, especially at high resolutions (e.g., 64³, 128³) (Li et al., 2022, Chou et al., 2022).
  • Most empirical studies are on restricted domains (e.g., Chairs/Tables, lymph node templates); generalization to highly diverse categories or full scenes needs exploration.
  • Trade-offs persist between patch/latent granularity and recovery of fine local details.
  • Diffusion step count (T=1000) and VAE latent structure limit speed; research into DDIM, learned samplers, or partial-to-complete mappings is ongoing (Chou et al., 2022, Li et al., 2022).
  • Zero-shot generalization and simultaneous geometry-appearance modeling are open problems (Li et al., 2022, Chou et al., 2022).

7. Context Within the Generative Modeling Landscape

Diffusion-SDF unites implicit representation learning with advances in generative diffusion modeling, distinguishing itself from mesh or point cloud GANs by providing:

  • Implicit, surface-agnostic geometry.
  • Multimodal and conditionally controllable generative processes.
  • Applicability from shape design to biomedical imaging, scene understanding, and geometric combinatorics.

Current research directions include accelerating generation, extending to richer conditioning modalities (multi-view, measurements, text), and joint modeling of shape and appearance.

Diffusion-SDF frameworks combine regularized implicit representations with high-fidelity, controllable generative modeling. Through diffusion processes over SDFs or SDF-derived latents, these methods enable diverse applications in 3D content creation, completion, structure extraction, and medical image synthesis. Key advances include architectural innovations (UinU-Net, view-local attention), guidance mechanisms, and quantitative superiority over previous generative approaches. Major open challenges involve scalability, broader domain generalization, and efficient conditional inference (Li et al., 2022, Chou et al., 2022, Guo et al., 10 Mar 2025, Moorthy et al., 2024, Zheng et al., 2023).
