
FlatCAD: Curvature-Regularized Neural SDFs

Updated 10 November 2025
  • FlatCAD is a curvature-regularized neural SDF learning framework that biases surfaces toward developability by penalizing the mixed Weingarten term.
  • It employs finite-difference and autodiff proxies to approximate curvature without full Hessian evaluation, significantly reducing memory and runtime.
  • Empirical evaluations on CAD benchmarks show superior Normal Consistency, lower Chamfer Distance, and higher F1 scores compared to traditional methods.

FlatCAD is a curvature-regularized approach to neural signed-distance field (SDF) learning tailored for computer-aided design (CAD) geometry. It introduces an efficient curvature proxy that targets only the off-diagonal (mixed) Weingarten term, allowing scalable enforcement of developable, CAD-style surface behavior in neural SDFs. FlatCAD eliminates the need for full Hessian evaluation and second-order automatic differentiation, reducing both memory footprint and computational costs while maintaining or improving geometric fidelity on engineering-grade shape reconstruction tasks.

1. Mathematical Foundations of Curvature Regularization

FlatCAD operates on an implicit neural surface representation

$$f: \mathbb{R}^3 \to \mathbb{R}, \qquad \mathcal{S} = \{x \mid f(x) = 0\}$$

where $f$ is an MLP-realized SDF. The gradient $\nabla f(x)$ provides surface normals

$$n(x) = \frac{\nabla f(x)}{\|\nabla f(x)\|}$$

satisfying the eikonal constraint $\|\nabla f\| = 1$ near $\mathcal{S}$. Curvature information is encoded in the Hessian $H_f(x) = \nabla^2 f(x)$, whose restriction to the tangent plane forms the shape (Weingarten) operator

$$S = \begin{pmatrix} u^T H_f u & u^T H_f v \\ v^T H_f u & v^T H_f v \end{pmatrix}$$

with $(u, v)$ an orthonormal basis for the tangent space at $x$. The principal curvatures $\kappa_1, \kappa_2$ are the eigenvalues of $S$, and the Gaussian curvature is $K = \kappa_1 \kappa_2 = \det S$.
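
As a concrete sanity check of these definitions, the sketch below assembles the shape operator for the analytic sphere SDF $f(x) = \|x\| - r$, whose Hessian is $(I - nn^T)/\|x\|$, so both principal curvatures on the surface equal $1/r$. This is a minimal NumPy illustration, not code from the paper:

```python
import numpy as np

# Analytic SDF of a sphere of radius r: f(x) = ||x|| - r.
# Its Hessian is (I - n n^T) / ||x||, so on the surface both
# principal curvatures equal 1/r and the mixed term S12 is 0.
r = 2.0
x = np.array([0.0, 0.0, r])           # point on the surface
n = x / np.linalg.norm(x)             # unit normal = grad f
H = (np.eye(3) - np.outer(n, n)) / np.linalg.norm(x)

# Orthonormal tangent frame (u, v) at x.
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

# Shape operator = Hessian restricted to the tangent plane.
S = np.array([[u @ H @ u, u @ H @ v],
              [v @ H @ u, v @ H @ v]])
k1, k2 = np.linalg.eigvalsh(S)        # principal curvatures: 1/r each
K = np.linalg.det(S)                  # Gaussian curvature: 1/r^2
```

For $r = 2$ this yields $\kappa_1 = \kappa_2 = 0.5$ and $K = 0.25$, matching the analytic values.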

FlatCAD's core innovation is to penalize only the off-diagonal (mixed) term $S_{12} = u^T H_f v$. Under rotation of the tangent frame by $\theta$, $S_{12}$ becomes

$$S_{12}(\theta) = \tfrac{1}{2}(\kappa_2 - \kappa_1) \sin 2\theta$$

and its squared expectation over random $\theta$ yields

$$\mathbb{E}_\theta[S_{12}^2] = \frac{(\kappa_2 - \kappa_1)^2}{8}$$

Regularizing $|S_{12}|$ biases the surface locally toward developability ($\kappa_1 \approx \kappa_2$), suppressing warp while allowing uniform bending or flattening, as desired in CAD reconstructions.
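
The expectation above is easy to verify numerically. The snippet below Monte-Carlo-averages $S_{12}(\theta)^2$ over random frame rotations for arbitrary illustrative curvature values (not values from the paper):

```python
import numpy as np

# Check E_theta[S12^2] = (k2 - k1)^2 / 8 for the rotation formula
# S12(theta) = 0.5 * (k2 - k1) * sin(2 * theta).
# k1, k2 are arbitrary illustrative curvatures.
rng = np.random.default_rng(0)
k1, k2 = 0.3, 1.1
theta = rng.uniform(0.0, np.pi, 1_000_000)   # random frame rotations
s12 = 0.5 * (k2 - k1) * np.sin(2.0 * theta)
empirical = np.mean(s12 ** 2)
analytic = (k2 - k1) ** 2 / 8.0              # = 0.08 here
```

The empirical mean agrees with the analytic value to within Monte Carlo error, since $\mathbb{E}[\sin^2 2\theta] = \tfrac12$ over a uniform $\theta$.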

The full FlatCAD training loss

$$\mathcal{L}_\mathrm{total} = \mathcal{L}_\mathrm{DM} + \lambda_\mathrm{DNM}\mathcal{L}_\mathrm{DNM} + \lambda_\mathrm{eik}\mathcal{L}_\mathrm{eik} + \lambda_\mathrm{proxy}\mathcal{L}_\mathrm{proxy}$$

comprises an on-surface Dirichlet term $\mathcal{L}_\mathrm{DM}$, an off-surface term $\mathcal{L}_\mathrm{DNM}$ (with $\alpha = 100$), an eikonal term $\mathcal{L}_\mathrm{eik}$, and the curvature proxy $\mathcal{L}_\mathrm{proxy}$, applied on a shell of $L$ near-surface points via

$$\mathcal{L}_\mathrm{proxy} = \frac{1}{L} \sum_{\ell=1}^{L} \left| \frac{u_\ell^T H_f(p_\ell)\, v_\ell}{\|\nabla f(p_\ell)\|} \right|$$

Recommended weights are $\lambda_\mathrm{DM} = 7000$, $\lambda_\mathrm{DNM} = 600$, $\lambda_\mathrm{eik} = 50$, and $\lambda_\mathrm{proxy} = 10$.

2. Implementation Strategies for the Curvature Proxy

FlatCAD offers two operationally distinct instantiations of the off-diagonal curvature penalty, equivalent up to discretization error:

2.1. Finite-Difference Proxy

This approach approximates the mixed second derivative $u^T H_f v$ without explicit Hessian computation. For a shell point $x_\Omega$ and step size $h$:

  • $f_0 = f(x_\Omega)$
  • $f_u = f(x_\Omega + h u)$
  • $f_v = f(x_\Omega + h v)$
  • $f_{uv} = f(x_\Omega + h u + h v)$

The mixed-difference stencil yields

$$D_{uv}^{(+)}(x_\Omega) = \frac{f_{uv} - f_u - f_v + f_0}{h^2} = u^T H_f(x_\Omega)\, v + O(h)$$

Thus the curvature proxy at $x_\Omega$ is

$$\widehat{S}_{12}(x_\Omega) = \frac{D_{uv}^{(+)}(x_\Omega)}{\|\nabla f(x_\Omega)\|} + O(h)$$

requiring four forward SDF evaluations and one gradient per proxy point, with the $O(h)$ error vanishing as $h \to 0$.
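
A minimal NumPy sketch of this stencil, substituting an analytic sphere SDF for the learned MLP (names and step sizes are illustrative). At the chosen point the tangent frame aligns with principal directions, so the mixed term $u^T H_f v$ vanishes and the proxy should return approximately zero:

```python
import numpy as np

# Four-evaluation finite-difference proxy for u^T H_f v, applied
# to an analytic sphere SDF in place of the paper's neural SDF.
def sdf_sphere(x, r=1.0):
    return np.linalg.norm(x) - r

def mixed_fd(f, x, u, v, h=1e-3):
    """D_uv^(+) = (f(x+hu+hv) - f(x+hu) - f(x+hv) + f(x)) / h^2."""
    return (f(x + h*u + h*v) - f(x + h*u) - f(x + h*v) + f(x)) / h**2

def grad_fd(f, x, h=1e-5):
    """Central-difference gradient (stand-in for autodiff)."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2*h)
    return g

x = np.array([0.0, 0.0, 1.0])   # on-surface shell point
u = np.array([1.0, 0.0, 0.0])   # orthonormal tangent frame
v = np.array([0.0, 1.0, 0.0])
s12_hat = mixed_fd(sdf_sphere, x, u, v) / np.linalg.norm(grad_fd(sdf_sphere, x))
```

On the sphere, `s12_hat` is tiny (on the order of the $O(h)$ truncation error), as expected for a principal-direction frame.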

2.2. Autodiff (Hessian-Vector Product) Proxy

This variant leverages Hessian-vector products and reverse-mode autodiff without forming the full Hessian:

  1. Compute $f(x_\Omega)$ (forward pass).
  2. Compute $g = \nabla f(x_\Omega)$ (reverse pass).
  3. Form the scalar $g_v = g \cdot v$.
  4. Compute $h_v = \nabla_x(g_v) = H_f(x_\Omega)\, v$ (second reverse pass).
  5. Contract: $\mathrm{mixed} = u \cdot h_v$.
  6. Normalize: $S_{12} = \mathrm{mixed} / \|g\|$.

The cost is two backward sweeps per point, with no explicit Hessian storage.

Pseudocode

for each proxy point x:
    f0 = f(x)
    g = grad(f0, x)        # first reverse sweep: ∇f(x)
    gv = dot(g, v)
    hv = grad(gv, x)       # second reverse sweep: H_f(x) v
    mixed = dot(u, hv)
    S12 = mixed / norm(g)
    L_proxy += abs(S12)
L_proxy /= L
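
The pseudocode above maps directly onto any reverse-mode autodiff framework. Below is a minimal PyTorch sketch using a toy polynomial in place of the learned SDF; the function, point, and frame are illustrative assumptions, not values from the paper:

```python
import torch

# Runnable sketch of the Proxy-AD steps with a toy differentiable
# "SDF" f(x) = x0*x1 + x2, whose mixed derivative u^T H v equals 1
# for u = e0, v = e1 at the origin (H has H01 = H10 = 1).
def f(x):
    return x[0] * x[1] + x[2]

x = torch.zeros(3, requires_grad=True)
u = torch.tensor([1.0, 0.0, 0.0])
v = torch.tensor([0.0, 1.0, 0.0])

f0 = f(x)                                               # forward
g = torch.autograd.grad(f0, x, create_graph=True)[0]    # first reverse sweep
gv = torch.dot(g, v)
hv = torch.autograd.grad(gv, x)[0]                      # second reverse sweep: H v
mixed = torch.dot(u, hv)
s12 = mixed / g.detach().norm()                         # normalized proxy
```

Here $\nabla f(0) = (0, 0, 1)$ has unit norm, so `s12` evaluates to the mixed Hessian entry itself, 1.0.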

In practice, the finite-difference (Proxy-FD) and autodiff (Proxy-AD) variants exhibit near-identical accuracy.

3. Practical Training Loop and Computational Complexity

A single FlatCAD training iteration comprises:

  1. Sampling $N$ on-surface points for $\mathcal{L}_\mathrm{DM}$.
  2. Sampling $M$ free-space points for $\mathcal{L}_\mathrm{DNM}$.
  3. Sampling $K$ points for $\mathcal{L}_\mathrm{eik}$.
  4. Drawing $L$ shell points with tangent frames for $\mathcal{L}_\mathrm{proxy}$.
  5. Forward pass: evaluating $f(\cdot)$ at all points and $\nabla f(\cdot)$ at eikonal and proxy points.
  6. Loss computation and gradient update.

In traditional full-Hessian Gaussian curvature regularization (as in NeurCADRecon), each sample requires all six independent second derivatives (necessitating six Hessian-vector products per point), which leads to large autodiff graphs and prohibitive GPU memory consumption.

By contrast:

  • Proxy-AD requires only one Hessian-vector product (two backward sweeps) per proxy point, with memory scaling comparable to standard first-order autodiff.
  • Proxy-FD forgoes second-order graphs entirely—using only four forward SDF evaluations and a single gradient per proxy point.

Empirical complexity (NVIDIA H100, 1 MB ABC subset):

| Method | Iter Time (ms) | Conv Time (s) | GPU Mem (GB) |
| --- | --- | --- | --- |
| DiGS | 2.99 | 289.8 | 1.61 |
| NSH | 1.84 | 151.8 | 1.79 |
| NeurCADRecon | 5.60 | 455.2 | 6.06 |
| Proxy-AD | 2.54 | 191.8 | 3.46 |
| Proxy-FD | 3.13 | 172.7 | 3.69 |

Both FlatCAD variants roughly halve memory and wall-clock time compared to NeurCADRecon, with minimal impact on convergence behavior.

4. Empirical Evaluation on CAD Datasets

FlatCAD has been validated on the ABC benchmark in two regimes: a "1 MB set" (100 random CAD parts, ≈1 MB) and a "5 MB set" (100 hand-selected models, ≈5 MB). Core metrics are Normal Consistency (NC; higher is better, ×10²), Chamfer Distance (CD; lower is better, ×10⁻³), and F1 score (×10²).

On the 5 MB set:

| Method | NC ↑ | CD ↓ | F1 ↑ |
| --- | --- | --- | --- |
| NeurCADRecon | 96.83 | 5.94 | 81.04 |
| Proxy-AD | 97.14 | 5.27 | 86.56 |
| Proxy-FD | 97.38 | 4.93 | 85.86 |

Proxy-FD attains the best NC and CD, while Proxy-AD leads F1; both outperform NeurCADRecon in accuracy and efficiency. Performance persists under data sparsity: FlatCAD remains robust down to 5 K input points; only at 1 K do reconstructions degrade notably. The proxy's developability bias supports plausible hole-filling.

Ablation studies show that every curvature proxy weight $\lambda_\mathrm{proxy} \in \{0.1, 1, 10, 100\}$ induces smooth, CAD-like surfaces, with $\lambda_\mathrm{proxy} = 10$ providing the best accuracy-to-smoothness balance. Proxy-AD and Proxy-FD differ in runtime by less than 5%.

5. Extensions: Scheduling the Weingarten Proxy Weight

FlatCAD's original formulation applied a constant off-diagonal Weingarten (ODW) penalty. Later work demonstrated that dynamically scheduling the ODW weight during training improves both stability and fidelity ["Scheduling the Off-Diagonal Weingarten Loss of Neural SDFs for CAD Models" (Yin et al., 5 Nov 2025)].

Five ODW weight schedules $\lambda_\mathrm{ODW}(t)$ were formalized, with $t \in [0, 1]$ denoting normalized training progress:

  • Constant: $\lambda_\mathrm{ODW}(t) = 10$ (FlatCAD baseline).
  • Linear decay: plateau at 10 for $t \leq 0.2$, linear decrease to 0.001 at $t = 0.5$, then to 0 at $t = 1$.
  • Quintic ("smooth") decay: slow easing from 10 to 0.001 between $t = 0.2$ and $t = 0.5$, then to 0.
  • Step interpolation: 10 for $t < 0.5$, 0.001 for $0.5 \leq t < 1$, then 0.
  • Warm-up (increasing linear): starts at 0 and ramps linearly to 10 at $t = 1$.
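Using the breakpoints listed above, the schedules can be sketched as plain Python functions of normalized progress $t$; the exact easing used between breakpoints is an assumption paraphrased from the text, not the paper's implementation:

```python
# Illustrative ODW weight schedules over normalized progress t in [0, 1].
def linear_decay(t, hi=10.0, lo=1e-3):
    if t <= 0.2:
        return hi                                  # plateau
    if t <= 0.5:
        return hi + (lo - hi) * (t - 0.2) / 0.3    # hi -> lo linearly
    return lo * (1.0 - (t - 0.5) / 0.5)            # lo -> 0

def quintic_decay(t, hi=10.0, lo=1e-3):
    if t <= 0.2:
        return hi
    if t <= 0.5:
        s = (t - 0.2) / 0.3
        ease = 1.0 - (1.0 - s) ** 5                # quintic ease-out (assumed form)
        return hi + (lo - hi) * ease
    return lo * (1.0 - (t - 0.5) / 0.5)

def step_schedule(t, hi=10.0, lo=1e-3):
    if t < 0.5:
        return hi
    return lo if t < 1.0 else 0.0

def warmup(t, hi=10.0):
    return hi * t                                  # 0 -> hi linearly
```

Each decay schedule keeps the full weight early (stabilizing optimization) and releases the penalty late so fine detail can emerge.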

On the ABC benchmark (25 models):

| Schedule | NC ↑ | CD ↓ | F1 ↑ | Time (s) |
| --- | --- | --- | --- | --- |
| FlatCAD (const.) | 96.14 | 4.37 | 84.98 | 877.5 |
| Linear decay | 97.95 | 3.05 | 90.59 | 882.7 |
| Quintic interp. | 98.01 | 2.86 | 92.72 | 878.2 |
| Step | 97.99 | 2.87 | 92.71 | 1003.5 |

Decay schedules yield 30–35% lower Chamfer Distance and improve F1 and NC. Quintic is optimal in stability and detail recovery, while warm-up (reverse schedule) degrades performance. Strong initial regularization stabilizes optimization, suppresses curvature noise, and facilitates eventual detail capture as the penalty decays.

6. Implications for CAD Reconstruction and Engineering Applications

FlatCAD, by decoupling the geometric bias (developability, minimal warp) from computational bottlenecks (full Hessian graphs), enables practical large-scale neural SDF learning for complex CAD surfaces. Its framework-agnostic, drop-in nature supports deployment in existing geometric learning stacks with minimal implementation effort. Robust behavior in data-sparse regimes and superior topology preservation recommend FlatCAD as a default regularizer for neural geometric reconstruction in engineering workflows.

The curvature proxy's parameter-free, purely geometric construction and tunable scheduling argue for its continued relevance as SDFs expand as a modeling primitive across CAD, reverse engineering, and shape optimization contexts. The separation of computational and geometric concerns in FlatCAD suggests the design space of higher-order regularization proxies for neural implicit methods is not yet exhausted.
