Spectral Beltrami Network (SBN)
- Spectral Beltrami Network (SBN) is a neural architecture that unifies spectral theory and deep learning to perform differentiable, bijective quasiconformal mappings.
- It employs truncated spectral expansions of Beltrami coefficients using spherical harmonics and Laplacian eigenfunctions to reduce dimensionality and control smoothness.
- The architecture integrates multiscale GNN layers with a U-Net framework, achieving state-of-the-art performance in geometry processing, medical imaging, and fluid dynamics simulations.
The Spectral Beltrami Network (SBN) is a neural architecture designed to provide a fully differentiable, spectrally informed solution to quasiconformal mapping and diffeomorphic surface registration problems, with rigorous control over conformal distortion and bijectivity. SBN unifies spectral theory, mesh-based geometry processing, and deep learning surrogates for variational partial differential equation (PDE) solvers, offering state-of-the-art performance for complex registration and parameterization tasks on planar, spherical, and cylindrical domains (Xu et al., 2 Feb 2026, Xu et al., 12 Nov 2025, Fré, 9 Dec 2025).
1. Mathematical Foundations: Beltrami Coefficient and Quasiconformal Mapping
SBN rests on the Beltrami equation, a fundamental tool for diffeomorphic mappings in complex geometry:

$$\frac{\partial f}{\partial \bar z} = \mu(z)\,\frac{\partial f}{\partial z},$$

where $\mu$ is the Beltrami coefficient—a complex-valued field measuring local conformal distortion, with $\|\mu\|_\infty < 1$ ensuring a sense-preserving homeomorphism. Surface parameterizations and self-maps constrained by a prescribed $\mu$ can control angular distortion and local-scale change while enforcing bijectivity.
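As an illustrative (not paper-specific) check of this definition, the Beltrami coefficient of a planar map can be estimated numerically from its Wirtinger derivatives; for the affine map $f(z) = z + 0.25\,\bar z$ the recovered $\mu$ is the constant $0.25$:

```python
import numpy as np

def beltrami_coefficient(f, z, h=1e-6):
    """Estimate mu = f_zbar / f_z of a map f: C -> C via central differences.

    Wirtinger derivatives: f_z = (f_x - i f_y)/2, f_zbar = (f_x + i f_y)/2.
    """
    fx = (f(z + h) - f(z - h)) / (2 * h)            # partial derivative in x
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # partial derivative in y
    f_z = 0.5 * (fx - 1j * fy)
    f_zbar = 0.5 * (fx + 1j * fy)
    return f_zbar / f_z

# Quasiconformal test map: f(z) = z + 0.25 * conj(z) has constant mu = 0.25.
f = lambda z: z + 0.25 * np.conj(z)
mu = beltrami_coefficient(f, 0.3 + 0.4j)
print(mu)  # ~0.25 + 0j
```

Since $\|\mu\|_\infty = 0.25 < 1$, this map is a quasiconformal homeomorphism of the plane.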
The Least-Squares Quasiconformal (LSQC) energy formalizes this into a variational problem. Writing $f = u + iv$, the prescribed $\mu$ induces the first-order condition $\nabla v = J\,A(\mu)\,\nabla u$, relaxed in least-squares form as

$$E_{\mathrm{LSQC}}(f) = \int_\Omega \bigl\| \nabla v - J\,A(\mu)\,\nabla u \bigr\|^2 \, dA,$$

with $A(\mu)$ an explicit function of $\mu$, and $J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ the standard symplectic matrix. The minimizer subject to pinning two points yields a unique, similarity-invariant mapping provided $\|\mu\|_\infty < 1$ (Xu et al., 12 Nov 2025).
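A minimal sketch of the coefficient matrix $A(\mu)$, assuming the standard Linear Beltrami Solver parameterization (the papers' exact form may differ); a useful sanity check is that $\det A(\mu) = 1$ for any admissible $\mu$, and $A(0)$ is the identity (the conformal case):

```python
import numpy as np

def lbs_matrix(mu):
    """Coefficient matrix A(mu), assumed in the Linear Beltrami Solver form.

    For mu = p + iq with |mu| < 1, A is symmetric positive-definite and
    satisfies det A = 1; mu = 0 (conformal) gives the identity matrix.
    """
    p, q = mu.real, mu.imag
    d = 1.0 - (p**2 + q**2)          # positive whenever |mu| < 1
    a1 = (1 - 2 * p + p**2 + q**2) / d
    a2 = -2 * q / d
    a3 = (1 + 2 * p + p**2 + q**2) / d
    return np.array([[a1, a2], [a2, a3]])

A = lbs_matrix(0.2 - 0.1j)
print(np.linalg.det(A))  # ~1.0
```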
On genus-0 (spherical) surfaces, a canonical two-chart stereographic atlas is used: the "north" chart $z_N$ and "south" chart $z_S$, each with its own Beltrami field $\mu_N$, $\mu_S$, related by an inversion on their overlap. The Measurable Riemann Mapping Theorem ensures that every such pair $(\mu_N, \mu_S)$ corresponds to a unique homeomorphism up to Möbius post-composition (Xu et al., 2 Feb 2026).
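The two-chart atlas can be sketched with explicit stereographic conventions (the projection formulas below are an assumed convention, not taken from the papers); under this convention the transition on the overlap is the anti-holomorphic inversion $z \mapsto 1/\bar z$:

```python
import numpy as np

def north_chart(p):
    """Stereographic projection from the north pole (assumed convention)."""
    x, y, z = p
    return (x + 1j * y) / (1.0 - z)

def south_chart(p):
    """Stereographic projection from the south pole (assumed convention)."""
    x, y, z = p
    return (x + 1j * y) / (1.0 + z)

# On the overlap (away from both poles) the charts satisfy z_N = 1 / conj(z_S):
# z_N * conj(z_S) = (x^2 + y^2) / (1 - z^2) = 1 on the unit sphere.
p = np.array([0.6, 0.48, 0.64])
p = p / np.linalg.norm(p)            # ensure the point lies on the unit sphere
zN, zS = north_chart(p), south_chart(p)
print(abs(zN - 1.0 / np.conj(zS)))   # ~0.0
```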
2. Spectral Parametrization of Beltrami Fields
SBN encodes the Beltrami coefficient using a truncated spectral expansion. On smooth spheres or disks, the Beltrami field is expanded in spherical harmonics or Laplace–Beltrami mesh eigenfunctions. Explicitly, for the sphere:

$$\mu(\theta, \varphi) = \sum_{\ell=0}^{L} \sum_{m=-\ell}^{\ell} c_{\ell m}\, Y_{\ell m}(\theta, \varphi),$$

where $Y_{\ell m}$ are spherical harmonics and $c_{\ell m} \in \mathbb{C}$ are the latent coefficients. For planar or surface meshes, the first nonzero-eigenvalue cotangent-Laplacian eigenfunctions provide the basis. For cylindrical or tubular domains appearing in fluid dynamics, SBN leverages complete orthogonal sets of vector-valued eigenmodes (e.g., Bessel/trigonometric functions for axisymmetric flows), decomposing the field into Beltrami, anti-Beltrami, and closed harmonic forms (Fré, 9 Dec 2025).
Truncating the spectral expansion reduces parameter dimensionality, allows for smoothness control, and enables efficient neural optimization in the latent coefficient space.
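A minimal numpy sketch of such a truncated expansion, hand-coding the complex spherical harmonics up to degree $\ell = 1$ (the coefficient values are illustrative only); the quadrature check confirms the basis is $L^2$-orthonormal on the sphere:

```python
import numpy as np

SH = {  # complex spherical harmonics up to degree l = 1, keyed by (l, m)
    (0, 0):  lambda t, p: 0.5 / np.sqrt(np.pi) * np.ones_like(t * p),
    (1, -1): lambda t, p: 0.5 * np.sqrt(1.5 / np.pi) * np.sin(t) * np.exp(-1j * p),
    (1, 0):  lambda t, p: 0.5 * np.sqrt(3.0 / np.pi) * np.cos(t) * np.ones_like(p),
    (1, 1):  lambda t, p: -0.5 * np.sqrt(1.5 / np.pi) * np.sin(t) * np.exp(1j * p),
}

def mu_truncated(coeffs, theta, phi):
    """Evaluate a truncated Beltrami field mu = sum_{lm} c_lm Y_lm(theta, phi)."""
    return sum(c * SH[lm](theta, phi) for lm, c in coeffs.items())

# Four complex coefficients already describe a smooth global distortion field.
coeffs = {(0, 0): 0.1 + 0.0j, (1, 0): 0.05j}
N = 400  # midpoint-rule grid in (theta, phi)
theta = (np.arange(N) + 0.5) * np.pi / N
phi = (np.arange(N) + 0.5) * 2 * np.pi / N
T, P = np.meshgrid(theta, phi, indexing="ij")
mu = mu_truncated(coeffs, T, P)

# Orthonormality check: integral of |Y_10|^2 sin(theta) dtheta dphi = 1.
dA = (np.pi / N) * (2 * np.pi / N)
norm = np.sum(np.abs(SH[(1, 0)](T, P))**2 * np.sin(T)) * dA
print(norm)  # ~1.0
```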
3. SBN Neural Architecture and Differentiable Surrogate
The core SBN architecture builds a neural surrogate for the LSQC variational solution:
- Input: Discrete Beltrami field $\mu_i$ at mesh vertices or its spectral coefficients $c_{\ell m}$, and two "pinned" points to fix conformal freedom.
- Spectral layers: Project features into the mesh eigenbasis $\{\phi_k\}$, with spectral mixing by trainable weight tensors. Each spectral band (low, medium, high frequency) uses separate filters.
- Multiscale message passing: Hierarchical Graph Neural Network (GNN) layers operating over multiple mesh resolutions, combined with pooling/unpooling, propagate local and global geometric context.
- U-Net hierarchy: Deep hierarchical architecture alternates spectral and message-passing modules across resolutions.
- Output: Approximates the planar (or spherical/stereographic) quasiconformal map satisfying the discrete LSQC equation and pinning constraints.
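The spectral layer described above can be sketched in a few lines of numpy; the shapes, the three-way band split, and the path-graph Laplacian standing in for a mesh cotangent Laplacian are all illustrative assumptions, not the authors' exact layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_layer(X, Phi, weights, bands):
    """One banded spectral mixing layer (illustrative sketch).

    X       : (n, d) vertex features
    Phi     : (n, K) Laplacian eigenbasis (columns = eigenfunctions)
    weights : list of (d, d) trainable filters, one per frequency band
    bands   : list of slices partitioning the K spectral coefficients
    """
    C = Phi.T @ X                     # project features to spectrum: (K, d)
    mixed = np.zeros_like(C)
    for band, W in zip(bands, weights):
        mixed[band] = C[band] @ W     # separate filter per band
    return np.tanh(Phi @ mixed)       # back to vertices + pointwise nonlinearity

# Toy setup: path-graph Laplacian eigenbasis stands in for a cotangent basis.
n, d, K = 64, 8, 24
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
_, Phi = np.linalg.eigh(L)
Phi = Phi[:, :K]                                      # K lowest-frequency modes
bands = [slice(0, 8), slice(8, 16), slice(16, 24)]    # low / mid / high
weights = [rng.standard_normal((d, d)) * 0.1 for _ in bands]
Y = spectral_layer(rng.standard_normal((n, d)), Phi, weights, bands)
print(Y.shape)  # (64, 8)
```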
The network is trained end-to-end by minimizing the LSQC loss, vertex/edge/region alignment terms, and regularization losses (angle distortion, smoothness, and seam consistency for two-chart representations) (Xu et al., 2 Feb 2026, Xu et al., 12 Nov 2025). The LSQC loss implements a differentiable, locally-resolved surrogate to the classical solver, suitable for back-propagation in learning or optimization contexts.
4. Optimization Frameworks: BOOST and SBN-Opt
SBN enables plug-in optimization frameworks for task-driven diffeomorphic mapping via gradient descent over Beltrami parameters:
- BOOST (Beltrami Optimization On Stereographic Two-charts) targets genus-0 spherical parameterization by optimizing two Beltrami fields on overlapping hemispheres. Constraints include seam consistency, folding penalties (Jacobian positivity), and global coordinate alignment. Loss terms aggregate landmark alignment, intensity (feature) alignment, and distortion regularization (Xu et al., 2 Feb 2026).
- SBN-Opt provides a general scheme for free-boundary diffeomorphism problems with unconstrained domain boundaries. It iteratively adjusts the Beltrami field, pinning locations, and post-similarity transformation to minimize task-specific objectives (e.g., density-equalizing measures, region correspondence, geometric feature alignment) with explicit angle distortion and smoothness regularization (Xu et al., 12 Nov 2025).
Both frameworks use the pretrained SBN as a frozen differentiable forward map; parameters are optimized via Adam over 200–500 iterations. Bijectivity is softly enforced by parameterizing $\mu$ to satisfy $|\mu| < 1$ or by introducing a log-barrier on $1 - |\mu|$.
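A toy version of this barrier-regularized descent, with a quadratic pull toward a target as a stand-in for the task-specific losses above (the step rule and barrier weight are illustrative assumptions):

```python
import numpy as np

def sbn_opt_step(mu, target, lam=1e-2, lr=0.05):
    """One gradient step on |mu - target|^2 - lam * log(1 - |mu|^2).

    The log-barrier softly enforces |mu| < 1 (bijectivity); 'target' plays
    the role of a task objective pulling the Beltrami coefficient.
    """
    # Real gradient of the barrier in complex notation: 2*lam*mu / (1 - |mu|^2).
    grad = 2 * (mu - target) + lam * 2 * mu / (1 - abs(mu)**2)
    return mu - lr * grad

# Even with a target outside the unit disk, the barrier keeps |mu| < 1.
mu, target = 0.0 + 0.0j, 1.2 + 0.3j
for _ in range(500):
    mu = sbn_opt_step(mu, target)
print(abs(mu) < 1.0)  # True: the iterate stays strictly inside the unit disk
```

The iterate settles where the task pull and the barrier balance, just inside $|\mu| = 1$; without the barrier, gradient descent would converge to the inadmissible target itself.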
5. Domain-Specific Extensions: Fluid Dynamics and Spectral PINNs
For axisymmetric fluid-dynamic problems, SBN can be instantiated as a spectral PINN (Physics-Informed Neural Network), with spectral coefficients as its primary variables (Fré, 9 Dec 2025). The domain is discretized with respect to cylindrical harmonics (Bessel/trigonometric modes), and the time-dependent Navier–Stokes system is projected mode-by-mode, yielding a coupled quadratic ODE hierarchy. The neural architecture outputs time-dependent coefficient trajectories, while the physics layer computes residuals of the PDE in the spectral basis. Training loss aggregates PDE residuals, initial/boundary conditions, and spectral regularization.
Pre-computation of mode coupling tensors (structure constants), exact inner products, and eigenvalues offers efficiency gains as nonlinearity evaluation is confined to spectral space. This paradigm drastically reduces computational complexity for highly symmetric or spectrally sparse flows, extending naturally to general bounded domains through basis adaptation.
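The mode-by-mode projection can be illustrated on the linear heat equation, where precomputed eigenvalues reduce the PDE to a decoupled ODE hierarchy (the Navier–Stokes case adds quadratic coupling through the precomputed structure constants):

```python
import numpy as np

# Galerkin projection of u_t = nu * u_xx on [0, 1] with u(0) = u(1) = 0 onto
# the sine eigenbasis e_k(x) = sqrt(2) sin(k pi x).  Precomputed eigenvalues
# lambda_k = (k pi)^2 decouple the PDE into the ODEs c_k' = -nu * lambda_k * c_k.
nu, K, T = 0.01, 8, 2.0
k = np.arange(1, K + 1)
lam = (k * np.pi) ** 2

c0 = np.zeros(K); c0[0] = 1.0          # initial condition u(x,0) = sqrt(2) sin(pi x)
c_exact = c0 * np.exp(-nu * lam * T)   # exact modal decay

# Explicit Euler on the spectral ODEs (the residual a spectral PINN's physics
# layer would penalize); nonlinearity-free, so cost lives entirely in spectral space.
dt, c = 1e-3, c0.copy()
for _ in range(int(T / dt)):
    c = c - dt * nu * lam * c
print(np.max(np.abs(c - c_exact)))  # small time-discretization error
```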
6. Empirical Results and Performance Analysis
SBN and its frameworks yield state-of-the-art empirical outcomes across geometry processing, medical imaging, and graphics:
| Task | Metric | SBN / BOOST Value | Baseline Value |
|---|---|---|---|
| Landmark registration (cortex) | Chamfer error | 0.0005–0.004 | 0.0017–0.0083 (FLASH) |
| Landmark registration (cortex) | Angle distortion | — | — |
| Sulcal depth alignment | Pearson's $r$ | — | — |
| Parcellation | Dice similarity | — | — |
| Density-equalizing param. | Variance reduction | — | — |
| Inconsistent registration | Avg. error reduction | — | — |
| Bijectivity | Jacobian foldings | 0 | — |
All mappings maintain bijectivity (zero Jacobian foldings), produce biologically plausible surface deformations, and concentrate distortion near features of interest. In ablation studies, removing the spectral layers or the message-passing networks doubles the error, confirming the synergistic roles of local and global operators (Xu et al., 12 Nov 2025, Xu et al., 2 Feb 2026).
7. Implementation Practices, Hyperparameters, and Limitations
Best practices identified for SBN training and deployment:
- Mesh preparation: Uniform triangulations (e.g., via DistMesh), avoidance of degenerate faces, and area normalization.
- Spectral computation: Efficient eigenpair computation via Lanczos/ARPACK, sparse storage of spectral transforms, and caching repeated operations.
- Hyperparameters: Typically up to $200$ eigenmodes, modest latent dimensions, three resolution levels with progressively more message-passing iterations, batch training (size 8), and standard Adam learning rates.
- Scaling: Spectral truncation keeps the optimization variables low-dimensional even for large meshes; GPU batching is used for efficiency.
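A sketch of the Lanczos/ARPACK practice via SciPy, using a sparse 2D grid Laplacian as a stand-in for a mesh cotangent Laplacian (the mesh, sizes, and mode count are illustrative):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Sparse 2D grid Laplacian (Dirichlet boundary) as a stand-in for a cotangent
# Laplacian; ARPACK/Lanczos with shift-invert extracts only the K lowest
# eigenpairs instead of the full 900x900 spectrum.
n = 30
L1 = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
              [0, 1, -1], format="csc")
I = sp.identity(n, format="csc")
L = (sp.kron(L1, I) + sp.kron(I, L1)).tocsc()   # 2D Laplacian via Kronecker sum

K = 12
vals, vecs = eigsh(L, k=K, sigma=0, which="LM")  # eigenvalues nearest zero
print(np.min(vals))  # ~4*(1 - cos(pi/(n+1))) ≈ 0.0205, the lowest grid mode
```

The returned `vecs` columns form the truncated eigenbasis that the spectral layers reuse; caching them amortizes the eigensolve across training iterations.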
Limitations include reliance on high-quality mesh discretizations for spectral accuracy, sensitivity to the choice of pinning points (a 15% error increase when they are omitted), and the need for careful regularization in high-frequency bands to avoid spectral blocking.
8. Significance and Outlook
SBN establishes a bridge between rigorous geometric analysis (Beltrami theory, spectral PDEs) and gradient-based learning frameworks. By rendering quasiconformal theory differentiable, it opens new avenues for task-driven, distortion-controlled surface mapping with explicit bijection guarantees. The spectral PINN variant further extends SBN to time-dependent PDEs in physics, especially for symmetric flows where modal sparsity can be exploited.
A plausible implication is the broader adoption of SBN-like architectures in geometric deep learning, scientific computing, and medical image analysis, where diffeomorphic, low-distortion mappings are essential (Xu et al., 2 Feb 2026, Fré, 9 Dec 2025, Xu et al., 12 Nov 2025).