Curvature-Aware Densification in Neural SDFs
- The paper introduces curvature-aware densification by integrating finite-difference approximations of second-order derivatives into neural SDF learning to achieve developable, feature-preserving surfaces.
- It leverages FD stencils for computing Gaussian and rank-deficiency curvature losses, significantly reducing memory requirements and computational overhead compared to higher-order automatic differentiation.
- Empirical evaluations demonstrate that the approach achieves accuracy competitive with autodiff-based curvature regularization while offering faster runtimes and robust performance under sparse sampling and incomplete data regimes.
Curvature-aware densification refers to the integration of curvature-sensitive regularization mechanisms into the learning of neural signed distance fields (SDFs), where the explicit modeling of surface curvature is employed to promote the reconstruction of developable, feature-preserving surfaces—even under sparse or incomplete sampling conditions. The finite-difference (FD) framework developed in "A Finite Difference Approximation of Second Order Regularization of Neural-SDFs" (Yin et al., 12 Nov 2025) enables this process through computationally efficient, second-order accurate approximations of differential geometric quantities, replacing costly higher-order automatic differentiation. The approach serves as a scalable and memory-efficient drop-in replacement for existing curvature regularization terms, supporting robust SDF learning across a range of geometric and data regimes.
1. Finite-Difference Stencils and Second-Order Accuracy
The FD framework approximates the second derivatives required for curvature regularization through central-difference stencils derived from local Taylor expansions, with truncation error $O(\varepsilon^2)$. For a neural SDF $f$ evaluated at a point $x$, an orthonormal tangent frame $\{t_1, t_2, n\}$ is established, where $n = \nabla f(x)/\lVert\nabla f(x)\rVert$. The directional second derivatives at $x$ are computed as
$$f_{t_i t_i}(x) \approx \frac{f(x + \varepsilon t_i) - 2 f(x) + f(x - \varepsilon t_i)}{\varepsilon^2}, \qquad f_{t_1 t_2}(x) \approx \frac{f(x + \varepsilon t_1 + \varepsilon t_2) - f(x + \varepsilon t_1 - \varepsilon t_2) - f(x - \varepsilon t_1 + \varepsilon t_2) + f(x - \varepsilon t_1 - \varepsilon t_2)}{4\varepsilon^2}.$$
The Taylor expansion confirms second-order accuracy:
$$f(x \pm \varepsilon t) = f(x) \pm \varepsilon\, f_t(x) + \tfrac{\varepsilon^2}{2} f_{tt}(x) \pm \tfrac{\varepsilon^3}{6} f_{ttt}(x) + O(\varepsilon^4),$$
yielding $f_{tt}(x) = \frac{f(x + \varepsilon t) - 2 f(x) + f(x - \varepsilon t)}{\varepsilon^2} + O(\varepsilon^2)$, with analogous expressions for $f_{t_2 t_2}$ and the mixed derivative $f_{t_1 t_2}$.
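A minimal sketch of these stencils is given below, assuming a PyTorch-style SDF network `f` that maps an (N, 3) batch of points to (N, 1) signed distances; the helper name `fd_second_derivatives` and the default `eps` are illustrative, not taken from the paper.

```python
def fd_second_derivatives(f, x, t1, t2, eps=1e-3):
    """O(eps^2) central-difference estimates of f_t1t1, f_t2t2, f_t1t2 at x.

    f:      callable mapping (N, 3) points to (N, 1) signed distances
    x:      (N, 3) evaluation points (torch tensors assumed)
    t1, t2: (N, 3) unit tangent directions forming a frame with the normal
    """
    f0 = f(x)  # centre value
    # Pure second derivatives: (f(x + eps t) - 2 f(x) + f(x - eps t)) / eps^2
    f_t1t1 = (f(x + eps * t1) - 2.0 * f0 + f(x - eps * t1)) / eps**2
    f_t2t2 = (f(x + eps * t2) - 2.0 * f0 + f(x - eps * t2)) / eps**2
    # Mixed derivative: four-point cross stencil divided by 4 eps^2
    f_t1t2 = (f(x + eps * (t1 + t2)) - f(x + eps * (t1 - t2))
              - f(x - eps * (t1 - t2)) + f(x - eps * (t1 + t2))) / (4.0 * eps**2)
    return f_t1t1, f_t2t2, f_t1t2  # 9 forward evaluations in total
```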
2. Curvature-Aware Regularization Losses
Curvature-aware densification employs FD-derived proxies for surface regularization. The principal mechanisms are:
- FD Gaussian Curvature Loss: The Gaussian curvature at $x$ is approximated as
$$K(x) \;\approx\; \frac{f_{t_1 t_1}\, f_{t_2 t_2} - f_{t_1 t_2}^{2}}{\lVert \nabla f(x) \rVert^{2}}.$$
Near the zero-level set, $\lVert \nabla f \rVert \approx 1$, simplifying this to $K \approx f_{t_1 t_1} f_{t_2 t_2} - f_{t_1 t_2}^{2}$. The associated loss is $\mathcal{L}_K = \mathbb{E}[\,|K|\,]$ or $\mathbb{E}[K^2]$ (sketched in code below).
- FD Rank-Deficiency Loss: The rank-deficiency term is similarly assembled from the FD second derivatives, approximating a Hessian determinant whose vanishing enforces a singular (rank-deficient) Hessian in the spirit of NSH; it is penalized via the expectation of its absolute value or of its square.
These losses target zero Gaussian curvature or rank-deficient Hessian matrices to favor developable or singular surfaces as dictated by reconstruction goals.
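As an illustration, a minimal sketch of the FD Gaussian-curvature penalty is given below, reusing `fd_second_derivatives` from above and assuming evaluation points near the zero level set (so $\lVert\nabla f\rVert \approx 1$); the function name and defaults are illustrative.

```python
def fd_gaussian_curvature_loss(f, x, t1, t2, eps=1e-3, squared=False):
    """FD proxy for E[|K|] (or E[K^2]), with K ~ det of the 2x2 tangent Hessian."""
    f11, f22, f12 = fd_second_derivatives(f, x, t1, t2, eps)
    K = f11 * f22 - f12**2          # Gaussian curvature near the zero level set
    return (K**2).mean() if squared else K.abs().mean()
```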
3. Step-Size Selection and Spatial Sampling
Optimal application of finite-difference regularization depends on a careful choice of the spatial FD step size $\varepsilon$ and the sampling regime:
- Step Size: Empirically, $\varepsilon$ on the order of 0.1–1% of the bounding-box diagonal (or smaller) captures fine detail, balancing truncation error against numerical noise.
- Sampling Scheme: Shell points are uniformly sampled in the bounding box (20k per iteration is typical), with near-surface projection via $x_s = x - f(x)\,\nabla f(x)$. A local tangent frame is constructed at each $x_s$ using the normal $n = \nabla f/\lVert\nabla f\rVert$, followed by random tangent directions $t_1, t_2 \perp n$. The FD stencils require evaluating $f$ at eight neighboring positions, four of which serve the mixed derivative (a sampling sketch follows below).
Surface-anchored Dirichlet samples (the observed input points, where $f = 0$ is enforced) anchor known geometry, while off-surface samples stabilize curvature estimates through dense coverage.
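Below is a minimal PyTorch sketch of this sampling and frame construction; the function name `sample_and_frame`, the batch size, and the unit-cube bounding box are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sample_and_frame(f, n_points=20_000, half_extent=1.0):
    """Sample shell points, project them toward the surface, and build tangent frames."""
    x = (torch.rand(n_points, 3) * 2.0 - 1.0) * half_extent   # uniform in the box
    x.requires_grad_(True)
    fx = f(x)
    grad = torch.autograd.grad(fx.sum(), x, create_graph=True)[0]
    x_s = x - fx.reshape(-1, 1) * grad        # near-surface projection x - f(x) grad f(x)
    n = F.normalize(grad, dim=-1)             # unit normal direction
    # Random tangent t1 orthogonal to n; t2 = n x t1 completes the orthonormal frame.
    r = torch.randn_like(n)
    t1 = F.normalize(r - (r * n).sum(dim=-1, keepdim=True) * n, dim=-1)
    t2 = torch.cross(n, t1, dim=-1)
    return x_s, n, t1, t2
```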
4. Integrated Training Objective and Hyperparameters
Curvature-aware densification is realized within a composite loss function
$$\mathcal{L} \;=\; \lambda_{\mathrm{D}}\,\mathcal{L}_{\mathrm{Dirichlet}} \;+\; \lambda_{\mathrm{NM}}\,\mathcal{L}_{\mathrm{non\text{-}manifold}} \;+\; \lambda_{\mathrm{E}}\,\mathcal{L}_{\mathrm{Eikonal}} \;+\; \lambda_{\mathrm{C}}\,\mathcal{L}_{\mathrm{curv}},$$
where:
- $\mathcal{L}_{\mathrm{Dirichlet}}$ aligns network predictions to the observed surface points,
- $\mathcal{L}_{\mathrm{non\text{-}manifold}}$ (Atzmon & Lipman, SAL++) penalizes non-manifold solutions,
- $\mathcal{L}_{\mathrm{Eikonal}}$ enforces the signed-distance constraint $\lVert\nabla f\rVert = 1$,
- $\mathcal{L}_{\mathrm{curv}}$ uses the FD Gaussian-curvature loss or the FD rank-deficiency loss from the FD framework.
The relative weights $\lambda_{\mathrm{D}}$, $\lambda_{\mathrm{NM}}$, $\lambda_{\mathrm{E}}$, and $\lambda_{\mathrm{C}}$ balance data fidelity against regularization; a linear warm-up for $\lambda_{\mathrm{C}}$ over the first few thousand iterations mitigates early training oscillations.
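The following sketch assembles such a composite objective with a linear warm-up on the curvature weight, reusing the helpers sketched above; the weight values, the warm-up length, and the SAL++-style off-surface term are placeholders, not the paper's exact settings.

```python
import torch

def total_loss(f, surf_pts, box_pts, frames, step, warmup_iters=2_000,
               w_dirichlet=1.0, w_nonmanifold=0.1, w_eikonal=0.1, w_curv=1.0):
    x_s, n, t1, t2 = frames                        # near-surface points + tangent frames
    l_dirichlet = f(surf_pts).abs().mean()         # f = 0 on the observed surface
    box = box_pts.detach().requires_grad_(True)
    f_box = f(box)
    l_nonmanifold = torch.exp(-100.0 * f_box.abs()).mean()   # SAL++-style penalty (illustrative)
    grad = torch.autograd.grad(f_box.sum(), box, create_graph=True)[0]
    l_eikonal = ((grad.norm(dim=-1) - 1.0) ** 2).mean()      # ||grad f|| = 1
    l_curv = fd_gaussian_curvature_loss(f, x_s, t1, t2)      # FD curvature proxy
    ramp = min(1.0, step / warmup_iters)                      # linear warm-up on w_curv
    return (w_dirichlet * l_dirichlet + w_nonmanifold * l_nonmanifold
            + w_eikonal * l_eikonal + ramp * w_curv * l_curv)
```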
5. Algorithmic Workflow, Complexity, and Memory Profiling
A typical training iteration proceeds as follows:
- Point Sampling: Sample surface and off-surface points.
- Forward Pass: Evaluate $f$ and $\nabla f$ at the sampled locations.
- Curvature Stencil Computation: For each off-surface sample $x_s$, construct the FD stencils using the tangent vectors $t_1$, $t_2$ and evaluate $f$ at the required offsets.
- Loss Evaluation: Compute $\mathcal{L}_{\mathrm{Dirichlet}}$, $\mathcal{L}_{\mathrm{non\text{-}manifold}}$, $\mathcal{L}_{\mathrm{Eikonal}}$, and the curvature loss $\mathcal{L}_{\mathrm{curv}}$, and aggregate them according to the hyperparameter weights.
- Backpropagation: Update network parameters using only first-order gradients.
The FD method demands approximately 9 forward passes per stencil point plus one backward gradient computation. Memory usage scales with that of ordinary first-order backpropagation, in contrast to the second-order computation graphs required for full Hessian autodiff. FD typically halves memory requirements and yields training speeds 1.3–2× faster than second-order differentiation.
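An end-to-end sketch of one training iteration along these lines is shown below; the MLP architecture, optimizer settings, and the `sample_surface_points` loader are hypothetical stand-ins, and the curvature term incurs the ~9 forward evaluations per stencil point noted above.

```python
import torch

# Small illustrative SDF MLP (architecture is an assumption, not the paper's).
sdf = torch.nn.Sequential(
    torch.nn.Linear(3, 256), torch.nn.Softplus(beta=100),
    torch.nn.Linear(256, 256), torch.nn.Softplus(beta=100),
    torch.nn.Linear(256, 1),
)
opt = torch.optim.Adam(sdf.parameters(), lr=1e-4)

for step in range(10_000):
    surf = sample_surface_points(20_000)          # hypothetical loader for input points
    box = torch.rand(20_000, 3) * 2.0 - 1.0       # uniform bounding-box samples
    frames = sample_and_frame(sdf)                # shell points, projection, tangent frames
    loss = total_loss(sdf, surf, box, frames, step)
    opt.zero_grad()
    loss.backward()                               # first-order gradients only
    opt.step()
```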
6. Empirical Performance and Robustness
Evaluations on ABC subsets (100 shapes, “1 MB” random and “5 MB” curated) establish the FD method’s parity with autodiff proxies:
- Accuracy: The NSH-FD and NCR-FD losses match or marginally trail NeurCADRecon/NSH in Chamfer Distance (CD), F1, and Normal Consistency (NC).
- Efficiency: Example metrics for 1 MB set on H100 GPU:
| Method | Chamfer D. | Normal Cons. | Time (s) | Mem (GB) |
|---|---|---|---|---|
| NSH | 2.74 | 93.93% | 559 | 6.1 |
| NSH-FD | 2.93 | 94.96% | 363 | 4.3 |
| NCR | 2.65 | 93.71% | 391 | 6.06 |
| NCR-FD | 4.10 | 93.41% | 331 | 4.03 |
- Sparse/Incomplete Data: Reconstruction degrades gracefully down to 5k points, with severe errors appearing only at extremely sparse (1k) sampling. Incomplete point clouds yield increased CD (+64%) and a minor reduction in NC (−0.7%), with topology preserved. On non-CAD shapes (Stanford Armadillo), FD reduces runtime by a factor of 1.9 with comparable reconstruction fidelity.
Ablation studies identify an optimal range for the FD step size $\varepsilon$, with results robust to scaling it by factors of 0.2 to 5.
7. Practical Recommendations and Limitations
Best practices for curvature-aware densification include:
- Selecting $\varepsilon$ to match the smallest feature scale (0.1–1% of the bounding-box diagonal); excessively large $\varepsilon$ increases truncation error, while overly small $\varepsilon$ amplifies numerical noise (see the helper sketched after this list).
- Ensuring dense off-surface sampling (≥10k per iteration) for stable curvature estimation.
- Implementing a gradual ramp-up of $\lambda_{\mathrm{C}}$ after the initial 500–1k iterations to regularize the learning dynamics.
- Recognizing benefits: FD decreases GPU memory by approximately 30–40%, reduces wall-clock time by up to 2×, and is compatible as a drop-in regularization replacement.
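A tiny helper for the step-size rule of thumb above might look as follows; the `fraction` default (0.5% of the diagonal) and the function name are illustrative.

```python
def fd_step_size(points, fraction=0.005):
    """Set eps as a fraction (0.1-1%) of the point cloud's bounding-box diagonal."""
    diag = (points.max(dim=0).values - points.min(dim=0).values).norm()
    return fraction * diag
```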
Limitations include the additional per-iteration forward calls (~8 extra evaluations per sample point), though overall convergence remains faster than with full second-order approaches. Performance is sensitive to the choice of $\varepsilon$ and to the off-surface point distribution, necessitating minor hyperparameter tuning.
In summary, finite-difference-based curvature-aware densification constitutes a simple, second-order-accurate, and memory-efficient approach for Gaussian and rank-deficiency regularization in neural-SDF reconstruction, supporting developable, feature-preserving surface synthesis even in regimes of sparse or incomplete geometric input (Yin et al., 12 Nov 2025).