Unsigned Distance Fields (UDFs)

Updated 23 September 2025
  • UDFs are non-negative scalar fields that map every point in ℝ³ to its minimum distance from a surface, effectively handling open or arbitrary-topology shapes.
  • They lack sign information, which creates challenges in gradient stability and precise surface extraction, especially within neural network frameworks.
  • Recent advances combine specialized loss formulations and neural architectures to overcome extraction ambiguities and enhance 3D reconstruction quality.

Unsigned Distance Fields (UDFs) are non-negative scalar fields defined in ℝ³, where the value at every spatial location corresponds to the Euclidean distance to the closest point on an underlying surface. Critically, unlike signed distance functions (SDFs) that encode both the distance and orientation (sign) relative to a closed manifold, UDFs lack a sign convention and are thus intrinsically suited for representing open, non-watertight, or arbitrary-topology surfaces. The widespread adoption of UDFs in geometric learning arises from their topological flexibility—handling open boundaries, layered structures, or non-orientable surfaces—albeit with distinctive challenges in learning, differentiability, and surface extraction, especially in neural implicit representation contexts.

1. Definition, Mathematical Structure, and Representational Scope

UDFs realize the function $f_U:\mathbb{R}^3\rightarrow \mathbb{R}_{\geq 0}$, where $f_U(x) = \min_{y\in S}\|x-y\|$ with $S$ the surface. The zero level set $S_0 = \{x \mid f_U(x) = 0\}$ implicitly defines the surface, allowing for the representation of both manifold and non-manifold, watertight and non-watertight configurations (Guillard et al., 2021, Long et al., 2022, Chen et al., 3 Jun 2025). UDFs explicitly discard the interior-exterior designation crucial to SDFs, enabling applications in scenarios with incomplete, open, or multiply-connected surface geometry, such as garment digitization, anatomical modeling, or objects with holes and boundaries.
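
As a concrete illustration of this definition, the following is a minimal sketch (not tied to any cited method) that approximates $f_U$ by nearest-neighbour search against a dense point sampling of $S$; the helper name `udf` and the half-disc example are illustrative assumptions.

```python
# Minimal sketch: approximate f_U(x) = min_{y in S} ||x - y|| by querying a
# k-d tree built over a dense point sampling of the surface S.
import numpy as np
from scipy.spatial import cKDTree

def udf(query_pts: np.ndarray, surface_pts: np.ndarray) -> np.ndarray:
    tree = cKDTree(surface_pts)              # spatial index over surface samples
    dists, _ = tree.query(query_pts, k=1)    # distance to the closest sample
    return dists                             # non-negative; ~0 on the surface

# Example: an open unit half-disc in the z = 0 plane (a non-watertight surface).
theta = np.random.uniform(0.0, np.pi, 20000)
r = np.sqrt(np.random.uniform(0.0, 1.0, 20000))
surface = np.stack([r * np.cos(theta), r * np.sin(theta), np.zeros_like(r)], axis=1)
print(udf(np.array([[0.0, 0.5, 0.3], [0.0, -1.0, 0.0]]), surface))
```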

Key UDF variants and extensions include:

  • Orthogonal UDFs (UODFs): Directionally restricted UDFs evaluated along three orthogonal ray directions (left-right, front-back, up-down), improving resistance to interpolation error and surface point localization (Lu et al., 3 Mar 2024).
  • Gradient Distance Function (GDF): Vector-valued function $v(x) = \hat{x} - x$ (with $\hat{x}$ the closest point on the surface), yielding both the distance ($\|v(x)\|$) and the direction (the normalized vector), and enabling differentiability at the surface (Le et al., 29 Oct 2024); a minimal sketch of this construction follows the list.
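
The GDF sketch below follows the same point-sampled-surface assumption as the earlier snippet: the vector to the nearest surface sample supplies the unsigned distance and a unit direction in one query. Function and variable names are illustrative.

```python
# Minimal sketch of the GDF idea: v(x) = x_hat - x, with x_hat the closest
# surface point; ||v(x)|| recovers the unsigned distance and v/||v|| the direction.
import numpy as np
from scipy.spatial import cKDTree

def gdf(query_pts: np.ndarray, surface_pts: np.ndarray):
    tree = cKDTree(surface_pts)
    dists, idx = tree.query(query_pts, k=1)
    v = surface_pts[idx] - query_pts                      # vector field v(x)
    direction = v / np.maximum(dists, 1e-12)[:, None]     # unit direction towards the surface
    return v, dists, direction
```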

2. Distinctive Challenges: Differentiability and Ambiguity at the Zero Level

Several inherent properties differentiate UDFs from SDFs and influence their practical viability in learning and geometry extraction:

  1. Zero Level Set Non-differentiability: By construction, UDFs are non-differentiable at the zero level set (the surface), as the distance function exhibits a cusp. This leads to ill-posed gradients at precisely the surface location, destabilizing neural training, impeding surface normal estimation, and producing fragmented or discontinuous reconstructions (Zhou et al., 2023, Fainstein et al., 14 Feb 2024, Xu et al., 1 Jun 2024); a brief numerical sketch after this list illustrates the effect.
  2. Absence of Sign Change: UDFs provide no binary indicator of "side" (inside or outside) relative to the surface, eliminating the zero-crossing heuristic used in most classical surface extraction (e.g., Marching Cubes, Dual Contouring) and leading to ambiguities in mesh placement, especially in the presence of neural noise (Guillard et al., 2021, Zhang et al., 2023, Stella et al., 25 Jul 2024).
  3. Surface Localization Ambiguity: Prediction noise or non-zero minima near the surface further complicate the accurate localization and extraction of the true zero level set, often resulting in artifacts, holes, or topological defects in meshed reconstructions (Zhang et al., 2023, Hou et al., 2023, Chen et al., 30 Aug 2024).
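
A brief numerical sketch of point (1): for the plane $z = 0$ the exact UDF is $|z|$, and automatic differentiation already shows the gradient flipping direction across the surface and collapsing to zero exactly on it (PyTorch's subgradient convention for the absolute value at 0), which is what destabilizes Eikonal-regularized training near the zero level set.

```python
# The UDF of the plane z = 0 is |z|; its gradient flips sign across the surface
# and vanishes (subgradient 0) exactly on it, so ||grad f|| = 1 fails at the cusp.
import torch

def plane_udf(x: torch.Tensor) -> torch.Tensor:
    return x[:, 2].abs()  # unsigned distance of (x, y, z) to the plane z = 0

pts = torch.tensor([[0.0, 0.0, 0.1],
                    [0.0, 0.0, -0.1],
                    [0.0, 0.0, 0.0]], requires_grad=True)
grad = torch.autograd.grad(plane_udf(pts).sum(), pts)[0]
print(grad[:, 2])        # tensor([ 1., -1.,  0.])  sign flip, zero at the surface
print(grad.norm(dim=1))  # tensor([1., 1., 0.])     Eikonal property breaks on the surface
```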

3. Neural UDF Learning: Architectures, Constraints, and Losses

Learning UDFs with neural networks (commonly MLPs or sinusoidal-activated SIREN networks) has been central to recent advances, spanning point cloud fitting (Ren et al., 2022, Hu et al., 1 Jul 2024), differentiable rendering from images (Long et al., 2022, Liu et al., 2023, Deng et al., 2023), and generative modeling (Zhou et al., 10 Apr 2024).

Salient methodologies:

  • Loss formulation: Learning typically combines distance supervision (e.g., $\mathcal{L}_\text{dist} = \sum_{p_i} |f(p_i)|$ over sampled surface points), Eikonal regularization ($\|\nabla f\| = 1$, adaptively weighted to avoid vanishing gradients near the surface (Xu et al., 1 Jun 2024)), and additional normal or curvature alignment losses; a combined loss sketch follows this list.
  • Handling non-negativity: Rather than imposing a hard positivity constraint (which can create wide dead zones or spurious local minima), recent methods advocate unconstrained MLP outputs with a soft positivity penalty (e.g., $\mathcal{L}_\text{positive} = \sum_x \exp(-100 f(x))$), allowing the field to assume small negative values where this aids stability and minima localization (Xu et al., 1 Jun 2024).
  • Gradient and normal alignment: Training may employ normal alignment terms between estimated and ground-truth normals (where available) or induce normals from local quadratic approximations (e.g., in point clouds via PCA or geometric upsampling (Ren et al., 2022, Hu et al., 1 Jul 2024)).
  • Level set projection and smoothing: Level set projection pulls non-zero level sets onto the zero level set, with loss terms enforcing gradient parallelism at the projection points, mitigating discontinuity and non-differentiability at the surface (Zhou et al., 2023).
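
A minimal combined-loss sketch covering the first two items and the soft positivity penalty: `model` is assumed to be any coordinate MLP mapping (N, 3) points to (N,) distance values, and the weights are illustrative rather than taken from any cited paper.

```python
import torch

def udf_loss(model, surface_pts, space_pts, w_dist=1.0, w_eik=0.1, w_pos=0.01):
    # Distance supervision: the predicted field should vanish on surface samples.
    loss_dist = model(surface_pts).abs().mean()

    # Eikonal regularization ||grad f|| ~ 1 on off-surface samples; in practice this
    # term is down-weighted near the surface to sidestep the vanishing-gradient issue.
    space_pts = space_pts.clone().requires_grad_(True)
    f = model(space_pts)
    grad = torch.autograd.grad(f.sum(), space_pts, create_graph=True)[0]
    loss_eik = ((grad.norm(dim=-1) - 1.0) ** 2).mean()

    # Soft positivity penalty exp(-100 f) instead of a hard non-negativity constraint.
    loss_pos = torch.exp(-100.0 * f).mean()

    return w_dist * loss_dist + w_eik * loss_eik + w_pos * loss_pos
```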

4. Surface Extraction and Meshing from UDFs

Translating neural UDFs into explicit surface representations is nontrivial due to the lack of sign flips. The field has produced a series of algorithmic innovations:

  • MeshUDF (Guillard et al., 2021). Principle: pseudo-sign via gradient voting. Key features: fast, differentiable, requires robust gradient estimation.
  • GeoUDF (Ren et al., 2022). Principle: geometry-guided UDF and gradient estimation. Key features: affine averaging of distances to tangent planes, edge-based marching cubes.
  • DoubleCoverUDF (Hou et al., 2023). Principle: extract the $r$-level iso-surface, then project/double-cover. Key features: guarantees an orientable manifold, resolves double layers, topology preservation via minimum cut.
  • DCUDF/DCUDF2 (Chen et al., 30 Aug 2024). Principle: energy-minimized projection with accuracy-aware weights and topology correction. Key features: selective subdivision, activation masks, iterative refinement and filling of topological defects.
  • DualMesh-UDF (Zhang et al., 2023). Principle: tangent-plane QEF minimization in an octree. Key features: robust in noisy neural fields, local cluster-based linear systems.
  • MIND (Chen et al., 3 Jun 2025). Principle: material interfaces from multi-labeled region partitioning. Key features: supports non-manifold, multi-phase interfaces, multi-label Marching Cubes.
  • Neural Surface Detection (Stella et al., 25 Jul 2024). Principle: a deep MLP maps UDF values plus gradients to a pseudo-sign configuration. Key features: local, parallelizable, bridges to traditional MC/DC algorithms.

Recent meshing approaches may leverage adaptive octrees (Zhang et al., 2023), multi-distance or gradient-aware field partitioning (Chen et al., 3 Jun 2025), or iterative correction via spatial message passing (Stella et al., 21 Sep 2025). Some algorithms, such as DCUDF2, introduce self-adaptive weighting and dynamic topology correction to prevent over-smoothing and ensure geometric and topological consistency (Chen et al., 30 Aug 2024).
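
As a simplified illustration of the pseudo-sign idea behind gradient-voting meshing (it is not the full MeshUDF voting scheme), two neighbouring grid samples can be treated as lying on opposite sides of a surface sheet when their UDF gradients point in roughly opposite directions; that local decision is what allows standard Marching Cubes style lookup tables to be reused. The helper below is hypothetical.

```python
# Hypothetical helper: decide whether a surface sheet passes between two adjacent
# grid samples from their (unnormalized) UDF gradients.
import numpy as np

def crosses_surface(g_a: np.ndarray, g_b: np.ndarray, threshold: float = 0.0) -> bool:
    g_a = g_a / (np.linalg.norm(g_a) + 1e-12)
    g_b = g_b / (np.linalg.norm(g_b) + 1e-12)
    return float(np.dot(g_a, g_b)) < threshold   # opposite-facing gradients => crossing

print(crosses_surface(np.array([0., 0., 1.]), np.array([0., 0., -1.])))  # True
print(crosses_surface(np.array([0., 0., 1.]), np.array([0., 0.1, 1.])))  # False
```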

5. Advancements in Neural Rendering and Generative Modeling

UDFs, by supporting arbitrary topology and open boundaries, are increasingly integrated into neural rendering frameworks:

  • NeuralUDF (Long et al., 2022), NeUDF (Liu et al., 2023), 2S-UDF (Deng et al., 2023): These leverage UDFs as implicit fields underlying multi-view reconstruction, replacing SDFs to enable reconstruction of objects with boundaries or complex topologies. Density-to-weight function design is crucial for unbiased surface weighting, occlusion awareness, and numerical stability in rendering; methods include custom piecewise functions, learnable density-to-weight decoupling, or probabilistic gradient-aware visibility indicators.
  • UDiFF (Zhou et al., 10 Apr 2024): Utilizes optimal, data-driven wavelet transforms for UDFs in the spatial-frequency domain, enabling 3D diffusion-based generative modeling conditioned on text or image prompts.
  • GaussianUDF (Li et al., 25 Mar 2025): Bridges the gap between discrete 3D Gaussian splatting and implicit UDFs by overfitting thin surface Gaussians and employing gradient-based supervision for UDF field inference from dense image/splatting data.

These pipelines report improvements in Chamfer distance, normal consistency, and qualitative sharpness in reconstructing both open and closed 3D scenes and objects.
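
For reference, one common convention for the Chamfer distance reported by these pipelines is the symmetric mean of nearest-neighbour distances between a predicted and a ground-truth point set (papers differ on squaring and normalization, so this is only one variant):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts, k=1)   # prediction -> ground truth
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts, k=1)   # ground truth -> prediction
    return float(d_pred_to_gt.mean() + d_gt_to_pred.mean())
```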

6. Practical Applications and Broader Implications

Domains leveraging UDFs include:

  • Scene completion and mapping: UDFs computed from point clouds or partial RGB-D/LiDAR input enable robust scene reconstruction without interior/exterior ambiguity, crucial for robotics and AR/VR in open or cluttered spaces (Richa et al., 2022).
  • Open/complex geometry digitization: Garment capture, anatomical modeling, and cultural heritage conservation benefit from UDF-based meshing, as open boundaries and complex self-intersecting features are naturally accommodated (Long et al., 2022, Ren et al., 2022, Lu et al., 3 Mar 2024).
  • Non-manifold interface extraction: MIND provides tools for inferring multi-phase boundaries, essential in composite materials, fluid flows, and medical imaging (Chen et al., 3 Jun 2025).
  • Generative modeling and editing: Wavelet-diffusion UDF methods underpin controllable, conditional generation of new open or layered shapes, which is not possible with watertight-centric SDFs (Zhou et al., 10 Apr 2024).

The development of efficiency- and accuracy-aware meshing (e.g., DCUDF2 (Chen et al., 30 Aug 2024), iterative networks (Stella et al., 21 Sep 2025)) enables consistent reconstruction quality at higher resolutions and for more challenging geometries.

7. Future Directions

Open research directions highlighted include:

  • Integrated, differentiable UDF meshing: Bridging the gap between implicit learning and downstream mesh supervision, as in MeshUDF and DCUDF2, will further close learning loops, particularly for sparse and real-world data (Guillard et al., 2021, Chen et al., 30 Aug 2024).
  • Handling the zero level set: New representations—such as hyperbolically-scaled fields (DUDF (Fainstein et al., 14 Feb 2024)) or gradient vector fields (GDF (Le et al., 29 Oct 2024))—aim to resolve non-differentiability at the surface, supporting both more stable learning and accurate normal/curvature computations.
  • Generalist and lightweight architectures: Approaches like LoSF-UDF (Hu et al., 1 Jul 2024) focus on localized geometric priors, attention fusion, and efficient model sizes, enabling rapid adaptation to varied datasets and robust operation on noisy, real-world data.
  • Topology-aware extraction for non-manifold geometry: Systems like MIND (Chen et al., 3 Jun 2025) and improvements in multi-labeled marching cubes address global spatial partitioning and interface construction for arbitrarily complex, possibly non-orientable surfaces, expanding UDF applicability beyond conventional graphics.

UDF-based representations, though fundamentally more challenging to learn and extract than their SDF counterparts, underpin a trajectory towards greater generality, fidelity, and flexibility in 3D vision and graphics pipelines, particularly as neural architectures and meshing algorithms continue to co-evolve.
