
Geometric Neural Operators (GNPs)

Updated 12 January 2026
  • Geometric Neural Operators (GNPs) are models that map raw point cloud data to intrinsic geometric features on manifolds.
  • They employ a three-stage architecture—encoding, operator layers, and decoding—to extract metrics, curvature, and normals without explicit mesh construction.
  • GNPs offer robust numerical solutions for geometric PDEs and serve as reusable, pretrained components in geometry-centric workflows.

Geometric Neural Operators (GNPs) are neural operator models that encode the intrinsic differential geometry of manifolds directly from unstructured point cloud representations, providing robust estimates of local and global geometric features and facilitating solutions of geometric partial differential equations (PDEs). As foundation models, GNPs serve as reusable, pretrained components for diverse geometry-centric machine learning tasks and numerical pipelines, requiring no explicit mesh construction or retraining for new shapes or noisy observations (Quackenbush et al., 6 Mar 2025, Quackenbush et al., 2024).

1. Mathematical Foundations and Operator Definition

GNPs formalize the mapping from raw local geometric data to fields of geometric quantities within a non-Euclidean domain. Consider a smooth 2-manifold $M\subset\mathbb{R}^3$ sampled by a point cloud $X=\{x_i\}$. Neighborhoods $N_\epsilon(x_0)$ are constructed, and local tangent-normal coordinates $(u,v)$ are introduced via the Monge-gauge patch

$$\sigma(u,v)=\bar{x}+u\,\psi_1+v\,\psi_2+s(u,v)\,\psi_3,$$

where $\{\psi_1,\psi_2\}$ span the tangent plane and $\psi_3$ is the local normal. All of the local differential geometry is encoded in the height function $s(u,v)$ and its derivatives.

The GNP is a nonlinear operator on function spaces:

$$\mathcal{G}_\theta : C^\infty\big(N_\epsilon(x_0);\mathbb{R}^{d_a}\big)\to C^\infty\big(N_\epsilon(x_0);\mathbb{R}^{d_u}\big),$$

mapping input functions (coordinate offsets, normals, etc.) to output feature fields (e.g., the predicted height $s(u,v)$, metric tensor, curvature, normals). Key geometric quantities are extracted using closed-form expressions:

  • Metric tensor (first fundamental form): $g=\begin{bmatrix}E & F\\ F & G\end{bmatrix}$, where $E=\sigma_u\cdot\sigma_u$, $F=\sigma_u\cdot\sigma_v$, $G=\sigma_v\cdot\sigma_v$.
  • Second fundamental form: $I\!I=\begin{bmatrix}L & M\\ M & N\end{bmatrix}$ with $L=\sigma_{uu}\cdot n$, $M=\sigma_{uv}\cdot n$, $N=\sigma_{vv}\cdot n$; shape operator $W=g^{-1}\,I\!I$.
  • Gaussian curvature: $K=\det W$; mean curvature: $H=\tfrac{1}{2}\operatorname{tr}W$.
  • Laplace–Beltrami operator:

$$\Delta_{LB}u=\frac{1}{\sqrt{|g|}}\,\partial_i\big(g^{ij}\sqrt{|g|}\,\partial_j u\big).$$

The operator perspective enables feature extraction and geometric PDE analyses directly on point clouds (Quackenbush et al., 6 Mar 2025).
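
To make these closed-form relations concrete, the following minimal NumPy sketch (illustrative only, not code from the cited papers) evaluates the metric, shape operator, and curvatures from the partial derivatives of a Monge-gauge height function at a single point. The closing unit-sphere check assumes the quadratic approximation $s(u,v)\approx-\tfrac{1}{2}(u^2+v^2)$ near a pole.

```python
import numpy as np

# Minimal sketch: closed-form geometry from the partial derivatives of a
# Monge-gauge height function s(u, v) at one point.
def monge_geometry(s_u, s_v, s_uu, s_uv, s_vv):
    # Tangent vectors sigma_u = (1, 0, s_u), sigma_v = (0, 1, s_v)
    E, F, G = 1.0 + s_u**2, s_u * s_v, 1.0 + s_v**2
    g = np.array([[E, F], [F, G]])                 # first fundamental form

    denom = np.sqrt(1.0 + s_u**2 + s_v**2)         # |sigma_u x sigma_v|
    L, M, N = s_uu / denom, s_uv / denom, s_vv / denom
    II = np.array([[L, M], [M, N]])                # second fundamental form

    W = np.linalg.solve(g, II)                     # shape operator W = g^{-1} II
    K = np.linalg.det(W)                           # Gaussian curvature
    H = 0.5 * np.trace(W)                          # mean curvature
    return g, W, K, H

# Unit-sphere check near a pole, where s(u, v) ~ -(u^2 + v^2) / 2:
g, W, K, H = monge_geometry(0.0, 0.0, -1.0, 0.0, -1.0)
print(K, H)   # ~ 1.0 and -1.0 (sign of H set by the normal orientation)
```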

2. Model Architecture: Encoding, Operator Layers, and Decoding

The canonical GNP architecture comprises three stages.

A. Lifting (Encoding): A linear map $\mathcal{P}$ lifts point-wise inputs $a(x_i)\in\mathbb{R}^{d_a}$ to features $v_0(x_i)\in\mathbb{R}^{d_v}$.

B. Operator Layers (Message Passing): Stacked operator layers execute nonlinear updates:

$$v_{t+1}(x_j) = \sigma\big(W_t\, v_t(x_j) + \mathcal{K}_t[v_t](x_j) + b_t(x_j)\big),$$

where $W_t$ are pointwise linear operators, $\sigma$ is a nonlinearity (ReLU), $b_t$ is a bias, and $\mathcal{K}_t$ is a discrete kernel (integral) operator performing local message passing averaged over neighborhoods, parameterized by small fully connected networks with block factorization and Nyström-style sampling.
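
As a rough illustration of the update above, the following PyTorch sketch implements one kernel-integral layer with neighborhood-averaged message passing. The module name, feature widths, and edge featurization by coordinate offsets are assumptions for exposition; the sketch does not reproduce the block factorization or Nyström sampling of the published architecture.

```python
import torch
import torch.nn as nn

# Rough sketch (not the released implementation) of one operator layer:
#   v_{t+1}(x_j) = sigma( W_t v_t(x_j) + K_t[v_t](x_j) + b_t ),
# where K_t averages kernel-weighted messages over the k nearest neighbors.
class GNPOperatorLayer(nn.Module):
    def __init__(self, d_v: int, d_edge: int = 3, hidden: int = 64):
        super().__init__()
        self.d_v = d_v
        self.W = nn.Linear(d_v, d_v)                  # pointwise W_t, bias plays the role of b_t
        self.kernel = nn.Sequential(                  # small FC net producing a
            nn.Linear(d_edge, hidden), nn.ReLU(),     # d_v x d_v kernel per edge
            nn.Linear(hidden, d_v * d_v),
        )

    def forward(self, v, pts, nbr_idx):
        # v: (n, d_v) features; pts: (n, 3) coordinates; nbr_idx: (n, k) neighbor indices
        n, k = nbr_idx.shape
        rel = pts[nbr_idx] - pts[:, None, :]          # (n, k, 3) edge offsets
        kmat = self.kernel(rel).view(n, k, self.d_v, self.d_v)
        msgs = torch.einsum('nkij,nkj->nki', kmat, v[nbr_idx])
        k_v = msgs.mean(dim=1)                        # neighborhood average = K_t[v_t](x_j)
        return torch.relu(self.W(v) + k_v)
```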

C. Projection (Decoding): The final latent $v_T$ is average-pooled and mapped via two fully connected layers to polynomial coefficients $\{c_{ij}\}$, reconstructing the local surface:

$$\hat{h}(u,v)=\sum_{i,j=0}^{N} c_{ij}\,\ell_i(u)\,\ell_j(v),$$

where $\{\ell_i\}$ are basis polynomials. Predicted geometric outputs are then produced analytically from $\hat{h}(u,v)$ (Quackenbush et al., 6 Mar 2025, Quackenbush et al., 2024).
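
For instance, if the basis $\{\ell_i\}$ is taken to be a Legendre tensor-product basis (an assumption made here for the example; the text above specifies only basis polynomials), the decoded coefficients can be evaluated on a rescaled patch as follows.

```python
import numpy as np
from numpy.polynomial import legendre

# Illustrative decoding step under an assumed Legendre tensor-product basis.
N = 4
c = 0.1 * np.random.randn(N + 1, N + 1)        # stand-in for network-predicted c_ij
u, v = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
h_hat = legendre.legval2d(u, v, c)             # hat{h}(u, v) evaluated on the patch grid
```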

3. Training Methodology and Robustness

Training data are generated from random radial manifolds parameterized via spherical harmonics, each sampled to $10^5$ points and annotated with ground-truth normals, fundamental forms, and curvatures. Neighborhoods are constructed using $k$-NN (typically $k=30$ or $k=50$) and rescaled to canonical coordinates. The composite loss for a patch $N_\epsilon(x_0)$ is:

$$\mathcal{L}(N_\epsilon(x_0);\theta) = \mathcal{L}_{\mathrm{rel}}\big(\hat{h}(u_i,v_i),w_i\big) + \lambda_1\,\mathcal{L}_{\mathrm{norm}}(\hat{\eta}_i,\eta_i) + \lambda_2\,\mathcal{L}_{\mathrm{rel}}\big(\hat{I}_i^{-1},I_i^{-1}\big) + \lambda_3\,\mathcal{L}_{\mathrm{rel}}\big(\hat{I}_i,I_i\big) + \lambda_4\,\mathcal{L}_{\mathrm{rel}}\big(\hat{K}_i,K_i\big),$$

with hyperparameters $\lambda_n=0.5$. Gaussian noise and outliers are included in the inputs to probe robustness but are excluded from the loss computation. Optimization is performed with Adam for approximately 200 epochs, yielding GNPs that filter artifacts and maintain performance under substantial noise, outperforming classical PCA and finite-difference estimators on corrupted surfaces (Quackenbush et al., 6 Mar 2025).
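
A schematic of the composite objective might look like the sketch below, assuming $\mathcal{L}_{\mathrm{rel}}$ is a norm-relative error and $\mathcal{L}_{\mathrm{norm}}$ an angular discrepancy for normals; both forms are assumptions about the exact definitions used in the papers.

```python
import torch

def l_rel(pred, target, eps=1e-8):
    # Relative error ||pred - target|| / ||target|| (assumed form of L_rel).
    return torch.norm(pred - target) / (torch.norm(target) + eps)

def patch_loss(out, ref, lam=(0.5, 0.5, 0.5, 0.5)):
    # out/ref: dicts of predictions and ground truth on one patch N_eps(x_0):
    # 'h' heights, 'eta' normals, 'g_inv' inverse metric, 'g' metric, 'K' curvature.
    loss = l_rel(out['h'], ref['h'])
    loss = loss + lam[0] * (1.0 - torch.cosine_similarity(out['eta'], ref['eta'], dim=-1)).mean()
    loss = loss + lam[1] * l_rel(out['g_inv'], ref['g_inv'])
    loss = loss + lam[2] * l_rel(out['g'], ref['g'])
    loss = loss + lam[3] * l_rel(out['K'], ref['K'])
    return loss
```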

4. Geometric Feature Extraction and Downstream Numerical Workflows

Trained GNPs yield polynomial height functions $\hat{h}(u,v)$ for arbitrary local patches, from which differential geometric descriptors are calculated:

  • Local metric (first fundamental form)
  • Normals and principal directions
  • Second fundamental form
  • Gaussian and mean curvatures

GNPs provide modular feature pipelines for surface geometry analysis, meshless classification, physical simulation, and mesh generation. GNP robustness to outlier and noise contamination is critical for downstream data-processing and real-time applications (Quackenbush et al., 6 Mar 2025).

5. Geometric PDE Solvers: Mean-Curvature Flow and Laplace–Beltrami Collocation

GNPs are capable geometric PDE solvers. For mean-curvature flow, updates are performed by

$$x_i^{n+1}=x_i^n+\Delta t\, H(x_i^n)\,\eta(x_i^n),$$

where $H$ and $\eta$ are GNP-predicted at each step, stabilized by Gaussian smoothing. This generates accurate, singularity-free evolutions on test shapes.
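
A minimal sketch of this time stepping is given below, assuming a pretrained estimator exposing the curvature and normal calls described in Section 6 (method names illustrative) and a simple Gaussian-weighted averaging as the stabilizer.

```python
import numpy as np

def gaussian_smooth(pts, sigma=0.5):
    # Simple Gaussian-weighted averaging over all points (illustrative stabilizer).
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return (w[:, :, None] * pts[None, :, :]).sum(axis=1) / w.sum(axis=1, keepdims=True)

def mcf_step(pts, gnp, dt=1e-3):
    # One mean-curvature-flow step: x <- x + dt * H(x) * eta(x), then smooth.
    H, _ = gnp.estimate_curvature(pts)          # (n,) mean curvature per point
    eta = gnp.estimate_normals(pts)             # (n, 3) unit normals
    return gaussian_smooth(pts + dt * H[:, None] * eta)
```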

For the Laplace–Beltrami equation $\Delta_{LB}u=-f$, GNP-derived geometric descriptors are combined with Generalized Moving Least Squares (GMLS) to fit local polynomials and evaluate operator actions, assembling a global collocation least-squares problem solved with LGMRES and algebraic multigrid (AMG). Mean relative errors are on the order of $10^{-2}$ to $10^{-1}$ on test shapes (Quackenbush et al., 6 Mar 2025, Quackenbush et al., 2024).

6. Transferability, Software Integration, and Practical Impact

Pretrained GNPs are distributed as part of the geo_neural_op Python package, supplying open-source weights and an API for robust geometry estimation:

  • .estimate_metric(pts) yields the metric $g_{ij}$,
  • .estimate_curvature(pts) yields $(H,K)$,
  • .estimate_normals(pts) yields unit normals $\eta$,
  • .solve_laplace_beltrami(pts, u, f) solves point cloud Laplace–Beltrami problems; a usage sketch follows the list.
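
A hypothetical end-to-end usage sketch is shown below. The loading helper load_pretrained() and the exact argument roles (notably the u argument of solve_laplace_beltrami) are assumptions, since only the method names listed above are documented here.

```python
import numpy as np
import geo_neural_op as gno   # package named in the text; the loader below is an assumption

pts = np.load("surface_point_cloud.npy")      # (n, 3) raw point cloud
gnp = gno.load_pretrained()                   # hypothetical convenience loader

g = gnp.estimate_metric(pts)                  # per-point metric g_ij
H, K = gnp.estimate_curvature(pts)            # mean and Gaussian curvature
eta = gnp.estimate_normals(pts)               # unit normals

u0 = np.zeros(len(pts))                       # role of the `u` argument assumed here
f = np.ones(len(pts))                         # example right-hand side for Delta_LB u = -f
u = gnp.solve_laplace_beltrami(pts, u0, f)
```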

Integration is transparent for data pipelines targeting shape analysis, simulation, and geometric inference. No mesh construction or retraining is needed for new surfaces, making GNPs suitable for scalable, geometry-aware machine learning and scientific computing (Quackenbush et al., 6 Mar 2025, Quackenbush et al., 2024).

7. Connections to Broader Operator-Learning Landscape

GNPs generalize neural operator concepts to intrinsic manifold settings, contrasting with spectral or grid-dependent models (e.g., Geometric Laplace Neural Operator (Tang et al., 18 Dec 2025), GeoMaNO (Han et al., 17 May 2025), GINOT (Liu et al., 28 Apr 2025)). Unlike approaches such as Reference Neural Operators (Cheng et al., 2024), which adapt to local geometric deformations around a reference, GNPs directly learn latent functionals on arbitrary point clouds, enabling topology-free geometric reasoning. GNP methodology overlaps with conformal geometric algebra neural models (Hitzer, 2013) in encoding geometric transformations as algebraic operator actions, but GNPs operate via learned polynomial surface models and feature aggregation, rather than Clifford algebraic versors.

GNPs thus constitute a rigorous, transfer-ready foundation for geometric operator learning across meshless, non-Euclidean, and noisy domains. Their foundational status in geometric computation aligns with emerging lines in multiscale renormalization and transformer-integrated operator architectures (Gabriel et al., 21 Feb 2025, Liu et al., 28 Apr 2025).
