
Curvature-Aware Densification in Neural SDFs

Updated 6 January 2026
  • The paper introduces curvature-aware densification by integrating finite-difference approximations of second-order derivatives into neural SDF learning to achieve developable, feature-preserving surfaces.
  • It leverages FD stencils for computing Gaussian and rank-deficiency curvature losses, significantly reducing memory requirements and computational overhead compared to higher-order automatic differentiation.
  • Empirical evaluations demonstrate that the approach achieves accuracy comparable to autodiff-based baselines while offering faster runtimes and robust performance under sparse sampling and incomplete data.

Curvature-aware densification refers to the integration of curvature-sensitive regularization mechanisms into the learning of neural signed distance fields (SDFs), where the explicit modeling of surface curvature is employed to promote the reconstruction of developable, feature-preserving surfaces—even under sparse or incomplete sampling conditions. The finite-difference (FD) framework developed in "A Finite Difference Approximation of Second Order Regularization of Neural-SDFs" (Yin et al., 12 Nov 2025) enables this process through computationally efficient, second-order accurate approximations of differential geometric quantities, replacing costly higher-order automatic differentiation. The approach serves as a scalable and memory-efficient drop-in replacement for existing curvature regularization terms, supporting robust SDF learning across a range of geometric and data regimes.

1. Finite-Difference Stencils and Second-Order Accuracy

The FD framework approximates the second derivatives required for curvature regularization with central-difference stencils based on local Taylor expansions, giving truncation error $O(h^2)$. For a neural SDF $f: \mathbb{R}^3 \to \mathbb{R}$ evaluated at a point $x_0$, an orthonormal tangent frame $(u, v) \perp n$ is established, where $n = \nabla f / \|\nabla f\|$. The directional second derivatives at $x_0$ are computed as follows:

  • $f_{uu} \approx \frac{f(x_0 + h u) - 2 f(x_0) + f(x_0 - h u)}{h^2}$
  • $f_{vv} \approx \frac{f(x_0 + h v) - 2 f(x_0) + f(x_0 - h v)}{h^2}$
  • $f_{uv} \approx \frac{f(x_0 + h u + h v) - f(x_0 + h u - h v) - f(x_0 - h u + h v) + f(x_0 - h u - h v)}{4 h^2}$

The Taylor expansion confirms second-order accuracy:

$$f(x_0 \pm h u) = f(x_0) \pm h\, \nabla f \cdot u + \frac{h^2}{2}\, u^T H_f u \pm \frac{h^3}{6}\, D^3 f(u, u, u) + O(h^4),$$

yielding $f_{uu} = u^T H_f u + O(h^2)$, with analogous expressions for $f_{vv}$ and $f_{uv}$.
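Below is a minimal sketch of these stencils, assuming a PyTorch-style SDF network `f` that maps an `(N, 3)` batch of points to `(N,)` signed distances; the function name `fd_second_derivatives` and the default step size are illustrative choices, not the paper's implementation.

```python
import torch

def fd_second_derivatives(f, x0, u, v, h=1e-3):
    """O(h^2) central-difference estimates of f_uu, f_vv, f_uv at points x0
    (N, 3) along orthonormal tangent directions u, v (each (N, 3))."""
    f0 = f(x0)
    # Pure second derivatives: (f(x + h d) - 2 f(x) + f(x - h d)) / h^2
    f_uu = (f(x0 + h * u) - 2.0 * f0 + f(x0 - h * u)) / h**2
    f_vv = (f(x0 + h * v) - 2.0 * f0 + f(x0 - h * v)) / h**2
    # Mixed derivative: four-point stencil divided by 4 h^2
    f_uv = (f(x0 + h * (u + v)) - f(x0 + h * (u - v))
            - f(x0 - h * (u - v)) + f(x0 - h * (u + v))) / (4.0 * h**2)
    return f_uu, f_vv, f_uv
```

Only forward evaluations of `f` are involved, so the resulting quantities remain differentiable with respect to the network parameters through ordinary first-order backpropagation.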

2. Curvature-Aware Regularization Losses

Curvature-aware densification employs FD-derived proxies for surface regularization. The principal mechanisms are:

  • FD Gaussian Curvature Loss: The Gaussian curvature at $x_0$ is approximated as

$$K_{FD}(x_0) = \frac{f_{uu} f_{vv} - f_{uv}^2}{\|\nabla f(x_0)\|^4}.$$

Near the zero-level set, $\|\nabla f\| \approx 1$, simplifying this to $K_{FD} \approx f_{uu} f_{vv} - f_{uv}^2$. The associated loss is $L_G = \mathbb{E}_{x_0}[|K_{FD}(x_0)|]$ or $\mathbb{E}_{x_0}[K_{FD}(x_0)^2]$.

  • FD Rank-Deficiency Loss: The rank-deficiency term is similarly $D_{FD}(x_0) = f_{uu} f_{vv} - f_{uv}^2$, penalized as $L_R = \mathbb{E}_{x_0}[|D_{FD}(x_0)|]$ or $\mathbb{E}_{x_0}[D_{FD}(x_0)^2]$.

These losses target zero Gaussian curvature or rank-deficient Hessian matrices to favor developable or singular surfaces as dictated by reconstruction goals.
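The sketch below illustrates one way these losses can be assembled from the FD stencils; it assumes `fd_second_derivatives` from the earlier sketch is in scope and that $x_0$ lies near the zero-level set, so the $\|\nabla f\|^4$ normalization of $K_{FD}$ is dropped. The function name and defaults are illustrative assumptions.

```python
import torch

def fd_curvature_loss(f, x0, u, v, h=1e-3, squared=False):
    """FD Gaussian-curvature / rank-deficiency proxy: penalize
    f_uu * f_vv - f_uv^2 over off-surface samples x0."""
    f_uu, f_vv, f_uv = fd_second_derivatives(f, x0, u, v, h)
    det = f_uu * f_vv - f_uv ** 2   # determinant of the 2x2 tangent Hessian block
    return (det ** 2).mean() if squared else det.abs().mean()
```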

3. Step-Size Selection and Spatial Sampling

Optimal application of finite-difference regularization depends on careful choice of the spatial FD step size $h$ and the sampling regime:

  • Step Size: Empirically, $h \simeq 10^{-3} \cdot \mathrm{diam}(\text{bounding box})$ or smaller captures fine detail while balancing truncation error against numerical noise.
  • Sampling Scheme: Shell points $x_0$ are sampled uniformly in the bounding box (20k per iteration is typical) and projected toward the surface using $f$. A local tangent frame is constructed at each $x_0$ from the normal $n$, with a random tangent direction $u \perp n$ and $v = n \times u$. The full FD stencil, including the mixed derivative, requires evaluating $f$ at eight neighboring positions.

Surface-anchored Dirichlet samples $x_s$ (with $|f(x_s)| \to 0$) pin down the known geometry, while dense off-surface coverage stabilizes the curvature estimates.
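A sketch of this sampling and tangent-frame construction is given below; the helper names, the single projection step, and the unit-cube bounding box are assumptions for illustration rather than the paper's exact procedure.

```python
import torch

def sample_shell_points(f, n_points=20_000, bbox_min=-1.0, bbox_max=1.0):
    """Sample the bounding box uniformly and take one projection step toward
    the surface: x0 = x - f(x) * n, with n = grad f / ||grad f||."""
    x = (torch.rand(n_points, 3) * (bbox_max - bbox_min) + bbox_min).requires_grad_(True)
    fx = f(x)
    grad = torch.autograd.grad(fx.sum(), x)[0]
    n = grad / grad.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    x0 = (x - fx.unsqueeze(-1) * n).detach()
    return x0, n.detach()

def tangent_frame(n):
    """Random orthonormal tangent directions u ⊥ n and v = n × u."""
    r = torch.randn_like(n)
    u = r - (r * n).sum(-1, keepdim=True) * n   # remove the normal component
    u = u / u.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    v = torch.cross(n, u, dim=-1)
    return u, v
```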

4. Integrated Training Objective and Hyperparameters

Curvature-aware densification is realized within a composite loss function:

$$L_{\text{total}} = L_{DM} + \lambda_{DNM} L_{DNM} + \lambda_{\text{eik}} L_{\text{eik}} + \lambda_{fd} L_{fd},$$

where:

  • $L_{DM} = \mathbb{E}_{x_s}[|f(x_s)|]$ aligns the network's zero-level set with the observed surface,
  • $L_{DNM}$ (the SAL++ term of Atzmon & Lipman) penalizes non-manifold solutions,
  • $L_{\text{eik}} = \mathbb{E}_x[(\|\nabla f(x)\| - 1)^2]$ enforces the signed-distance (eikonal) constraint,
  • $L_{fd}$ is either $L_G$ or $L_R$ from the FD framework.

Typical hyperparameter settings are $\lambda_{DNM} = 0.01$, $\lambda_{\text{eik}} = 0.1$, and $\lambda_{fd} \in [0.4, 1.0]$. A linear warm-up of $\lambda_{fd}$ over the first few thousand iterations mitigates early training oscillations.
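The following sketch shows how the composite objective and the $\lambda_{fd}$ warm-up can be wired together, reusing `fd_curvature_loss` from the earlier sketch; $L_{DNM}$ is omitted for brevity, and all names and defaults are illustrative assumptions.

```python
import torch

def total_loss(f, x_surf, x_off, u, v, step,
               lam_eik=0.1, lam_fd=0.7, warmup_iters=2_000, h=1e-3):
    # Dirichlet term: |f| should vanish on observed surface samples.
    l_dm = f(x_surf).abs().mean()

    # Eikonal term: ||grad f|| should be 1 at off-surface samples.
    x = x_off.detach().requires_grad_(True)
    grad = torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]
    l_eik = ((grad.norm(dim=-1) - 1.0) ** 2).mean()

    # FD curvature regularizer (L_G or L_R), linearly warmed up.
    ramp = min(1.0, (step + 1) / warmup_iters)
    l_fd = fd_curvature_loss(f, x_off, u, v, h=h)

    return l_dm + lam_eik * l_eik + ramp * lam_fd * l_fd
```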

5. Algorithmic Workflow, Complexity, and Memory Profiling

A typical training iteration proceeds as follows:

  1. Point Sampling: Sample $N_{\text{surf}}$ surface and $N_{\text{off}}$ off-surface points.
  2. Forward Pass: Evaluate $f$ and $\nabla f$ at the sampled locations.
  3. Curvature Stencil Computation: For each $x_0 \in X_{\text{off}}$, build the FD stencils from the tangent vectors $u, v$ and evaluate $f$ at the required offsets.
  4. Loss Evaluation: Compute $L_{DM}$, $L_{\text{eik}}$, and the curvature loss $L_{fd}$, and aggregate them according to the hyperparameter weights.
  5. Backpropagation: Update network parameters using only first-order gradients.

The FD method requires approximately nine forward passes per $x_0$ plus one backward pass. Memory usage scales as $O(\text{batch size} \cdot \text{cost}_{\text{first-order}})$, in contrast to $O(\text{batch size} \cdot n_{\text{params}})$ for full Hessian automatic differentiation. FD typically halves memory requirements and yields training speeds 1.3–2× faster than second-order differentiation.
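A skeleton of this iteration is sketched below, combining the helpers from the previous sections; the network `sdf`, the batch sizes, and the optimizer settings are placeholder assumptions.

```python
import torch

def train(sdf, surface_pts, iters=10_000, batch_surf=10_000, batch_off=20_000):
    opt = torch.optim.Adam(sdf.parameters(), lr=1e-4)
    for step in range(iters):
        # 1. Point sampling: a surface batch plus off-surface shell points.
        idx = torch.randint(0, surface_pts.shape[0], (batch_surf,))
        x_surf = surface_pts[idx]
        x_off, n = sample_shell_points(sdf, batch_off)
        u, v = tangent_frame(n)
        # 2-4. Forward passes, FD stencils, and loss aggregation.
        loss = total_loss(sdf, x_surf, x_off, u, v, step)
        # 5. Backpropagation: only first-order gradients of the network.
        opt.zero_grad()
        loss.backward()
        opt.step()
```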

6. Empirical Performance and Robustness

Evaluations on ABC subsets (100 shapes, “1 MB” random and “5 MB” curated) establish the FD method’s parity with autodiff proxies:

  • Accuracy: FD-NSH and FD-NCR losses match or marginally trail NeurCADRecon/NSH in Chamfer Distance (CD), F1, and Normal Consistency (NC).
  • Efficiency: Example metrics for the 1 MB set on an H100 GPU:

| Method | Chamfer Dist. | Normal Cons. | Time (s) | Mem (GB) |
|--------|---------------|--------------|----------|----------|
| NSH    | 2.74          | 93.93%       | 559      | 6.1      |
| NSH-FD | 2.93          | 94.96%       | 363      | 4.3      |
| NCR    | 2.65          | 93.71%       | 391      | 6.06     |
| NCR-FD | 4.10          | 93.41%       | 331      | 4.03     |
  • Sparse/Incomplete Data: Reconstruction degrades gracefully down to 5k input points, with large errors only for extremely sparse (1k) sampling. Incomplete point clouds yield increased CD (+64%) and a minor reduction in NC (−0.7%), with topology preserved. On non-CAD shapes (Stanford Armadillo), FD reduces runtime by a factor of 1.9 with comparable reconstruction fidelity.

Ablation studies identify $\lambda_{fd}$ values in $[0.6, 1.0]$ as optimal, with performance robust to variation from $0.2$ to $5$.

7. Practical Recommendations and Limitations

Best practices for curvature-aware densification include:

  • Selecting $h$ to match the smallest feature scale (0.1–1% of the bounding-box diagonal); an excessive $h$ increases truncation error, while an overly small $h$ amplifies numerical noise (see the helper sketched after this list).
  • Ensuring dense off-surface sampling (≥10k points per iteration) for stable curvature estimation.
  • Ramping up $\lambda_{fd}$ gradually after the initial 500–1k iterations to regularize learning dynamics.
  • Recognizing benefits: FD decreases GPU memory by approximately 30–40%, reduces wall-clock time by up to 2×, and works as a drop-in replacement for existing curvature regularizers.
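A small helper capturing the step-size rule of thumb above; the helper name and the default fraction are assumptions for illustration.

```python
import torch

def fd_step_size(points, fraction=1e-3):
    """Choose the FD step h as a fraction of the point cloud's bounding-box diagonal."""
    diag = (points.max(dim=0).values - points.min(dim=0).values).norm().item()
    return fraction * diag
```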

The main limitation is the increased number of per-iteration forward calls (roughly eight additional evaluations per sample), although overall convergence remains faster than with full second-order approaches. Performance is also sensitive to the choice of $h$ and the off-surface point distribution, necessitating minor hyperparameter tuning.

In summary, finite-difference-based curvature-aware densification constitutes a simple, second-order-accurate, and memory-efficient approach for Gaussian and rank-deficiency regularization in neural-SDF reconstruction, supporting developable, feature-preserving surface synthesis even in regimes of sparse or incomplete geometric input (Yin et al., 12 Nov 2025).

References

  • Yin et al., "A Finite Difference Approximation of Second Order Regularization of Neural-SDFs," 12 November 2025.
