Curvature-Aware Densification in Neural SDFs
- The paper introduces curvature-aware densification by integrating finite-difference approximations of second-order derivatives into neural SDF learning to achieve developable, feature-preserving surfaces.
- It leverages FD stencils for computing Gaussian and rank-deficiency curvature losses, significantly reducing memory requirements and computational overhead compared to higher-order automatic differentiation.
- Empirical evaluations demonstrate that the approach matches competitive accuracy while offering faster runtimes and robust performance under sparse sampling and incomplete data regimes.
Curvature-aware densification refers to the integration of curvature-sensitive regularization mechanisms into the learning of neural signed distance fields (SDFs), where the explicit modeling of surface curvature is employed to promote the reconstruction of developable, feature-preserving surfaces—even under sparse or incomplete sampling conditions. The finite-difference (FD) framework developed in "A Finite Difference Approximation of Second Order Regularization of Neural-SDFs" (Yin et al., 12 Nov 2025) enables this process through computationally efficient, second-order accurate approximations of differential geometric quantities, replacing costly higher-order automatic differentiation. The approach serves as a scalable and memory-efficient drop-in replacement for existing curvature regularization terms, supporting robust SDF learning across a range of geometric and data regimes.
1. Finite-Difference Stencils and Second-Order Accuracy
The FD framework approximates the second derivatives required for curvature regularization with central-difference stencils derived from local Taylor expansions, carrying truncation error $O(\varepsilon^2)$. For a neural SDF $f_\theta$ evaluated at a point $x$, an orthonormal tangent frame $\{t_1, t_2\}$ is established in the plane orthogonal to the unit normal $n = \nabla f_\theta(x)/\lVert \nabla f_\theta(x) \rVert$. The directional second derivatives at $x$ are computed as

$$f_{uu}(x) \approx \frac{f_\theta(x+\varepsilon u) - 2 f_\theta(x) + f_\theta(x-\varepsilon u)}{\varepsilon^2}, \qquad f_{uv}(x) \approx \frac{f_\theta(x+\varepsilon(u+v)) - f_\theta(x+\varepsilon(u-v)) - f_\theta(x+\varepsilon(v-u)) + f_\theta(x-\varepsilon(u+v))}{4\varepsilon^2}.$$

Substituting the Taylor expansion $f_\theta(x \pm \varepsilon u) = f_\theta(x) \pm \varepsilon\,\partial_u f_\theta + \tfrac{\varepsilon^2}{2}\partial_u^2 f_\theta \pm \tfrac{\varepsilon^3}{6}\partial_u^3 f_\theta + O(\varepsilon^4)$ confirms second-order accuracy: the stencil yields $\partial_u^2 f_\theta + O(\varepsilon^2)$, with analogous expressions for $\partial_v^2 f_\theta$ and the mixed derivative $\partial_u \partial_v f_\theta$.
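The stencils above can be sketched directly; the following is a minimal numpy illustration (not the paper's code), checked against a polynomial whose second derivatives are known exactly:

```python
# Sketch: central-difference stencils for directional second derivatives of a
# scalar field f, as used for curvature regularization (O(eps^2) accurate).
import numpy as np

def d2_pure(f, x, u, eps):
    """f_uu(x) ~ [f(x + eps u) - 2 f(x) + f(x - eps u)] / eps^2."""
    return (f(x + eps * u) - 2.0 * f(x) + f(x - eps * u)) / eps**2

def d2_mixed(f, x, u, v, eps):
    """f_uv(x) via the four diagonal stencil points."""
    return (f(x + eps * (u + v)) - f(x + eps * (u - v))
            - f(x + eps * (v - u)) + f(x - eps * (u + v))) / (4.0 * eps**2)

# Check on f(p) = p0^2 * p1 at x = (1, 2, 0): f_00 = 2*p1 = 4, f_01 = 2*p0 = 2.
f = lambda p: p[0]**2 * p[1]
x = np.array([1.0, 2.0, 0.0])
e0, e1 = np.eye(3)[0], np.eye(3)[1]
print(d2_pure(f, x, e0, 1e-3))       # ~4.0
print(d2_mixed(f, x, e0, e1, 1e-3))  # ~2.0
```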
2. Curvature-Aware Regularization Losses
Curvature-aware densification employs FD-derived proxies for surface regularization. The principal mechanisms are:
- FD Gaussian Curvature Loss: The Gaussian curvature at a shell point $x$ is approximated in the tangent frame as
$$K(x) \approx \frac{f_{t_1 t_1}\, f_{t_2 t_2} - f_{t_1 t_2}^2}{\lVert \nabla f_\theta(x) \rVert^2},$$
with the second derivatives computed from the FD stencils. Near the zero-level set, $\lVert \nabla f_\theta \rVert \approx 1$, simplifying this to $K \approx f_{t_1 t_1} f_{t_2 t_2} - f_{t_1 t_2}^2$. The associated loss penalizes $|K|$ or $K^2$ averaged over shell points.
- FD Rank-Deficiency Loss: The rank-deficiency term is similarly the determinant of the FD-approximated Hessian, penalized via $|\det H_{f_\theta}|$ or its square so that the Hessian is driven toward singularity near the surface.
These losses target zero Gaussian curvature or rank-deficient Hessian matrices to favor developable or singular surfaces as dictated by reconstruction goals.
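The Gaussian-curvature proxy can be sanity-checked on an analytic SDF. This sketch (an assumed setup, not the paper's code) evaluates the tangent-frame determinant formula on a sphere of radius $R$, whose exact Gaussian curvature is $1/R^2$:

```python
# Sketch: FD proxy K ~ f_t1t1 * f_t2t2 - f_t1t2^2 (valid near the zero level
# set of an SDF, where ||grad f|| ~ 1), tested on a sphere SDF with K = 1/R^2.
import numpy as np

def fd_gauss_curvature(f, x, t1, t2, eps=1e-3):
    f0 = f(x)
    f11 = (f(x + eps*t1) - 2*f0 + f(x - eps*t1)) / eps**2
    f22 = (f(x + eps*t2) - 2*f0 + f(x - eps*t2)) / eps**2
    f12 = (f(x + eps*(t1+t2)) - f(x + eps*(t1-t2))
           - f(x + eps*(t2-t1)) + f(x - eps*(t1+t2))) / (4*eps**2)
    return f11 * f22 - f12**2

R = 2.0
sphere_sdf = lambda p: np.linalg.norm(p) - R
x = np.array([R, 0.0, 0.0])                 # point on the surface
t1 = np.array([0.0, 1.0, 0.0])              # tangent frame at x
t2 = np.array([0.0, 0.0, 1.0])
K = fd_gauss_curvature(sphere_sdf, x, t1, t2)
print(K)  # ~0.25 = 1/R^2
```

Driving this quantity to zero (as the FD Gaussian-curvature loss does) rewards locally developable patches.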
3. Step-Size Selection and Spatial Sampling
Optimal application of finite-difference regularization depends on careful choice of the spatial FD step-size $\varepsilon$ and the sampling regime:
- Step Size: Empirically, $\varepsilon$ on the order of 0.1–1% of the bounding-box diagonal (or smaller) captures fine detail, balancing truncation error against numerical noise.
- Sampling Scheme: Shell points are uniformly sampled in the bounding box (20k per iteration is typical), with near-surface projection via $x \leftarrow x - f_\theta(x)\, \nabla f_\theta(x)/\lVert \nabla f_\theta(x) \rVert$. A local tangent frame is constructed at each shell point from the normal $n = \nabla f_\theta/\lVert \nabla f_\theta \rVert$, followed by random tangent directions $t_1, t_2 \perp n$. FD stencils require evaluation at eight neighboring positions, four of which serve the mixed derivative.
Surface-anchored Dirichlet samples (input points on which $f_\theta = 0$ is enforced) anchor known geometry, while off-surface samples stabilize curvature estimates through dense coverage.
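The sampling-and-frame step can be sketched as follows; the helper names are hypothetical, and the projection and frame construction follow the standard SDF recipe described above:

```python
# Sketch: sample a shell point in the bounding box, project it toward the zero
# level set via x <- x - f(x) * n, and build a random orthonormal tangent frame.
import numpy as np

def numeric_grad(f, x, h=1e-5):
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2*h)   # central differences
    return g

def shell_point_and_frame(f, lo, hi, rng):
    x = rng.uniform(lo, hi, size=3)            # uniform in bounding box
    g = numeric_grad(f, x)
    n = g / np.linalg.norm(g)
    x = x - f(x) * n                           # near-surface projection
    r = rng.standard_normal(3)                 # random tangent direction
    t1 = r - np.dot(r, n) * n
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)                       # completes the frame
    return x, n, t1, t2

sphere = lambda p: np.linalg.norm(p) - 1.0
x, n, t1, t2 = shell_point_and_frame(sphere, -1.5, 1.5, np.random.default_rng(0))
print(abs(sphere(x)))                                 # ~0: near the surface
print(np.dot(t1, n), np.dot(t2, n), np.dot(t1, t2))   # ~0: orthonormal frame
```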
4. Integrated Training Objective and Hyperparameters
Curvature-aware densification is realized within a composite loss function:
$$\mathcal{L} = \lambda_{\text{D}}\,\mathcal{L}_{\text{Dirichlet}} + \lambda_{\text{SAL}}\,\mathcal{L}_{\text{SAL}} + \lambda_{\text{eik}}\,\mathcal{L}_{\text{eik}} + \lambda_{\text{curv}}\,\mathcal{L}_{\text{curv}},$$
where:
- $\mathcal{L}_{\text{Dirichlet}}$ aligns network predictions to the observed surface,
- $\mathcal{L}_{\text{SAL}}$ (Atzmon & Lipman SAL++) penalizes non-manifold solutions,
- $\mathcal{L}_{\text{eik}}$ enforces the signed-distance (eikonal) constraint $\lVert \nabla f_\theta \rVert = 1$,
- $\mathcal{L}_{\text{curv}}$ uses the FD Gaussian-curvature or rank-deficiency proxy from the FD framework.
The relative weights $\lambda$ are set empirically per the paper's defaults. A linear warm-up for $\lambda_{\text{curv}}$ over the first few thousand iterations mitigates early training oscillations.
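The composite objective can be sketched as below. The specific weights and the SAL-style off-surface term are illustrative assumptions, not the paper's values:

```python
# Sketch (hypothetical weights/terms): composite objective combining Dirichlet,
# SAL-style, eikonal, and FD curvature losses, with linear curvature warm-up.
import numpy as np

def total_loss(f_surf, f_off, grad_off, curv_proxy, step, warmup=2000,
               w_dirichlet=1.0, w_sal=0.1, w_eik=0.1, w_curv=1.0):
    l_dirichlet = np.mean(np.abs(f_surf))            # f = 0 on observed surface
    l_sal = np.mean(np.exp(-100.0 * np.abs(f_off)))  # push |f| > 0 off-surface
    l_eik = np.mean((np.linalg.norm(grad_off, axis=1) - 1.0)**2)  # ||grad f||=1
    l_curv = np.mean(np.abs(curv_proxy))             # FD curvature proxy
    ramp = min(step / warmup, 1.0)                   # linear warm-up on w_curv
    return (w_dirichlet * l_dirichlet + w_sal * l_sal
            + w_eik * l_eik + ramp * w_curv * l_curv)
```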
5. Algorithmic Workflow, Complexity, and Memory Profiling
A typical training iteration proceeds as follows:
- Point Sampling: Sample $N_s$ surface and $N_o$ off-surface (shell) points.
- Forward Pass: Evaluate $f_\theta$ and $\nabla f_\theta$ at the sampled locations.
- Curvature Stencil Computation: For each shell point, calculate FD stencils using the tangent vectors $t_1, t_2$ and evaluate $f_\theta$ at the required offsets.
- Loss Evaluation: Compute the Dirichlet, SAL, and eikonal terms together with the curvature loss, and aggregate them according to the hyperparameter weights.
- Backpropagation: Update network parameters using only first-order gradients.
The FD method demands approximately nine forward passes per shell point (the center plus eight stencil offsets) in addition to one backward gradient calculation. Because only a first-order computation graph is retained, memory scales linearly with the number of stencil evaluations, in contrast to the nested-autodiff graphs required for full Hessians. FD typically halves memory requirements and yields training speeds 1.3–2× faster than second-order differentiation.
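The nine query points per shell sample can be batched into a single forward pass; a minimal sketch of the offset construction (hypothetical helper name):

```python
# Sketch: the nine stencil queries per shell point -- center, +-eps*t1,
# +-eps*t2, and four diagonals for the mixed derivative -- batched as one array
# so a network can evaluate them in a single forward pass.
import numpy as np

def stencil_points(x, t1, t2, eps):
    offsets = np.stack([np.zeros(3),
                        eps*t1, -eps*t1, eps*t2, -eps*t2,
                        eps*(t1+t2), eps*(t1-t2), eps*(t2-t1), -eps*(t1+t2)])
    return x[None, :] + offsets          # shape (9, 3)

pts = stencil_points(np.zeros(3), np.eye(3)[0], np.eye(3)[1], 1e-3)
print(pts.shape)  # (9, 3)
```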
6. Empirical Performance and Robustness
Evaluations on ABC subsets (100 shapes, “1 MB” random and “5 MB” curated) establish the FD method’s parity with autodiff proxies:
- Accuracy: FD-NSH and FD-NCR losses match or marginally trail NeurCADRecon/NSH in Chamfer Distance (CD), F1, and Normal Consistency (NC).
- Efficiency: Example metrics for 1 MB set on H100 GPU:
| Method | Chamfer D. | Normal Cons. | Time (s) | Mem (GB) |
|---|---|---|---|---|
| NSH | 2.74 | 93.93% | 559 | 6.1 |
| NSH-FD | 2.93 | 94.96% | 363 | 4.3 |
| NCR | 2.65 | 93.71% | 391 | 6.06 |
| NCR-FD | 4.10 | 93.41% | 331 | 4.03 |
- Sparse/Incomplete Data: Reconstruction degrades gracefully as the input point count shrinks, with severe errors appearing only under extremely sparse sampling. Incomplete point clouds yield increased CD (+64%) and a minor reduction in NC (−0.7%), with topology preserved. On non-CAD shapes (Stanford Armadillo), FD reduces runtime by a factor of 1.9 with comparable reconstruction fidelity.
Ablation studies identify step sizes $\varepsilon$ in the recommended range as optimal, with performance robust to variation around that range.
7. Practical Recommendations and Limitations
Best practices for curvature-aware densification include:
- Selecting $\varepsilon$ to match the smallest feature scale (0.1–1% of the bounding-box diagonal); an excessive $\varepsilon$ increases truncation error, while an overly small $\varepsilon$ amplifies numerical noise.
- Ensuring dense off-surface sampling (≥10k per iteration) for stable curvature estimation.
- Implementing a gradual ramp-up of $\lambda_{\text{curv}}$ after the initial 500–1k iterations to regularize learning dynamics.
- Recognizing benefits: FD decreases GPU memory by approximately 30–40%, reduces wall-clock time by up to 2×, and is compatible as a drop-in regularization replacement.
Limitations entail increased per-iteration forward calls (~8 additional evaluations per shell sample), although overall convergence remains faster than full second-order approaches. Performance is sensitive to the choice of $\varepsilon$ and the off-surface point distribution, necessitating minor hyperparameter tuning.
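The step-size guideline above can be captured in a small helper (the function name is hypothetical):

```python
# Sketch: choose the FD step eps as a fraction of the bounding-box diagonal,
# per the 0.1-1% guideline; frac=0.005 picks the middle of that range.
import numpy as np

def fd_step(points, frac=0.005):
    lo, hi = points.min(axis=0), points.max(axis=0)
    return frac * np.linalg.norm(hi - lo)    # frac * bbox diagonal length

cube = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])  # unit-cube extremes
print(fd_step(cube))  # 0.5% of sqrt(3) ~ 0.00866
```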
In summary, finite-difference-based curvature-aware densification constitutes a simple, second-order-accurate, and memory-efficient approach for Gaussian and rank-deficiency regularization in neural-SDF reconstruction, supporting developable, feature-preserving surface synthesis even in regimes of sparse or incomplete geometric input (Yin et al., 12 Nov 2025).