
Neural Unsigned Distance Fields for Implicit Function Learning (2010.13938v1)

Published 26 Oct 2020 in cs.CV and cs.LG

Abstract: In this work we target a learnable output representation that allows continuous, high resolution outputs of arbitrary shape. Recent works represent 3D surfaces implicitly with a Neural Network, thereby breaking previous barriers in resolution, and ability to represent diverse topologies. However, neural implicit representations are limited to closed surfaces, which divide the space into inside and outside. Many real world objects such as walls of a scene scanned by a sensor, clothing, or a car with inner structures are not closed. This constitutes a significant barrier, in terms of data pre-processing (objects need to be artificially closed creating artifacts), and the ability to output open surfaces. In this work, we propose Neural Distance Fields (NDF), a neural network based model which predicts the unsigned distance field for arbitrary 3D shapes given sparse point clouds. NDF represent surfaces at high resolutions as prior implicit models, but do not require closed surface data, and significantly broaden the class of representable shapes in the output. NDF allow to extract the surface as very dense point clouds and as meshes. We also show that NDF allow for surface normal calculation and can be rendered using a slight modification of sphere tracing. We find NDF can be used for multi-target regression (multiple outputs for one input) with techniques that have been exclusively used for rendering in graphics. Experiments on ShapeNet show that NDF, while simple, is the state-of-the art, and allows to reconstruct shapes with inner structures, such as the chairs inside a bus. Notably, we show that NDF are not restricted to 3D shapes, and can approximate more general open surfaces such as curves, manifolds, and functions. Code is available for research at https://virtualhumans.mpi-inf.mpg.de/ndf/.

Authors (3)
  1. Julian Chibane (10 papers)
  2. Aymen Mir (5 papers)
  3. Gerard Pons-Moll (81 papers)
Citations (299)

Summary

  • The paper introduces NDFs that predict unsigned distances to model both open and closed surfaces without the need for artifact-inducing pre-processing.
  • It develops efficient surface extraction algorithms that extend sphere tracing to produce dense point clouds, normals, and meshes from complex geometries.
  • Experimental results on ShapeNet show superior reconstruction performance, paving the way for versatile applications in 3D modeling and manifold learning.

Neural Unsigned Distance Fields for Implicit Function Learning

The paper "Neural Unsigned Distance Fields for Implicit Function Learning" presents a novel approach to 3D shape representation that addresses significant limitations of existing methods. Traditional neural implicit representations typically require shapes to be enclosed, utilizing Signed Distance Fields (SDFs) to demarcate inside and outside regions. This approach poses challenges when dealing with real-world objects, which often feature open surfaces or intricate internal structures that defy neat enclosure.

The authors introduce Neural Distance Fields (NDFs), which model the unsigned distance to the surface, offering a more flexible representation that accommodates open surfaces without the need for artificially closing them. This innovation considerably expands the class of shapes that can be represented, including manifolds, curves, and complex surfaces with internal details.
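To make the contrast concrete: for an open curve such as a 2D line segment, a signed distance is ill-defined (there is no interior to give the distance a sign), while the unsigned distance is always well-defined. The minimal sketch below uses an analytic field as a stand-in for what an NDF would learn; the function name `udf_segment` is illustrative, not from the paper's code.

```python
import numpy as np

def udf_segment(p, a, b):
    """Unsigned distance from point p to the open segment a-b.

    An open curve has no inside/outside, so a *signed* distance is
    ill-defined -- but the unsigned distance always exists.
    """
    ab = b - a
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab
    return np.linalg.norm(p - closest)

a, b = np.array([0.0, 0.0]), np.array([1.0, 0.0])
print(udf_segment(np.array([0.5, 0.3]), a, b))  # 0.3: directly above the middle
print(udf_segment(np.array([2.0, 0.0]), a, b))  # 1.0: beyond an endpoint
```

An NDF replaces the analytic formula with a network conditioned on a sparse input point cloud, but the output it regresses has exactly this form.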

Key Contributions

  1. Introduction of NDFs: The primary contribution is a neural model that predicts the unsigned distance to a surface. This allows neural networks to represent both closed and open surfaces, overcoming the limitations of SDF- and occupancy-based methods, which require pre-processing to close shapes, often introducing artifacts and losing detail.
  2. Efficient Surface Extraction Algorithms: The paper proposes algorithms that enable dense point cloud, surface normal, and mesh extraction from NDFs. By leveraging properties such as gradient-evaluation efficiency, these methods extend existing techniques like sphere tracing to accommodate the nuances of unsigned distance fields.
  3. State-of-the-Art Performance: In experiments using ShapeNet, the proposed NDFs achieve superior performance in reconstructing the geometry of objects, particularly those with internal structures, compared to existing methods that rely on closed surface assumptions.
  4. Versatility Across Domains: Beyond shape representation, NDFs offer potential in fields such as function approximation and manifold learning, demonstrating robust capabilities in multi-target regression tasks, where they can capture complex data structures without averaging out multiple modes.
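The core extraction idea can be illustrated with the projection step the paper builds on: move a query point p along the negative field gradient by the predicted distance, p ← p − f(p)·∇f(p)/‖∇f(p)‖, and iterate until the points lie on the surface. The sketch below uses an analytic circle field and a finite-difference gradient as stand-ins for a trained NDF and autograd; names such as `project_to_surface` are illustrative, not from the paper's code.

```python
import numpy as np

def udf_circle(p, r=1.0):
    """Unsigned distance to a circle of radius r (stand-in for a trained NDF)."""
    return abs(np.linalg.norm(p) - r)

def grad(f, p, eps=1e-5):
    """Central finite-difference gradient, mimicking autograd through a network."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps
        g[i] = (f(p + dp) - f(p - dp)) / (2 * eps)
    return g

def project_to_surface(f, p, steps=5):
    """Iterate p <- p - f(p) * grad f(p) / ||grad f(p)|| toward the zero level set."""
    for _ in range(steps):
        g = grad(f, p)
        p = p - f(p) * g / (np.linalg.norm(g) + 1e-12)
    return p

# Project random points onto the surface to obtain a dense point cloud.
rng = np.random.default_rng(0)
pts = rng.uniform(-2.0, 2.0, size=(100, 2))
surface = np.array([project_to_surface(udf_circle, p) for p in pts])
print(np.abs(np.linalg.norm(surface, axis=1) - 1.0).max())  # near-zero residual
```

With a learned field the gradient comes from backpropagation rather than finite differences, and the normalized gradient ∇f/‖∇f‖ also yields the surface normals mentioned above.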

Implications and Future Directions

Practically, NDFs simplify working with complex, real-world 3D data. They bypass the problematic pre-processing traditionally needed to close open shapes, preserving the detail and integrity of the input data. This benefits applications such as virtual and augmented reality, where accurate environmental modeling is essential.

Theoretically, the introduction of NDFs into the toolbox of 3D representation can stimulate further research into adaptive learning for geometric data, opening avenues for exploring unsolved challenges in machine learning and computer graphics. As the authors suggest, the potential expansion of NDF application to more general computational tasks, such as function regression through techniques adapted from classical ray tracing, underscores the broader applicability of their approach.
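As one concrete instance of the ray-tracing connection, a sphere tracer can be adapted to an unsigned field: since f carries no sign, there is no sign flip with which to detect a surface crossing, so the march simply stops once f(p) falls below a tolerance. The sketch below is an illustrative minimal version with an analytic field standing in for a trained NDF, not the paper's exact renderer.

```python
import numpy as np

def udf_sphere(p, r=1.0):
    """Unsigned distance to a sphere of radius r (stand-in for a trained NDF)."""
    return abs(np.linalg.norm(p) - r)

def sphere_trace(f, origin, direction, tau=1e-4, max_steps=100):
    """March along the ray by the unsigned distance f(p).

    An unsigned field never changes sign, so the usual inside/outside
    test is unavailable; instead the march terminates once f(p) < tau,
    i.e. the ray point is within tau of the surface.
    """
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = f(p)
        if d < tau:
            return t  # hit: within tau of the surface
        t += d  # safe step: the surface is at least d away in any direction
    return None  # miss

t_hit = sphere_trace(udf_sphere, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(t_hit)  # ≈ 2.0: a ray from z=-3 toward the origin hits the unit sphere at z=-1
```

The same marching logic, applied to a field over a function's graph, is what lets the authors reuse rendering machinery for multi-target regression.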

In conclusion, the paper provides a valuable contribution to implicit function learning by presenting a method that expands the representational capacity of deep learning frameworks beyond the constraints imposed by traditional signed and occupancy-based models. This approach not only achieves state-of-the-art results in existing benchmarks but also positions itself as a versatile tool for diverse applications in AI and computer vision.