Green’s-function Spherical Neural Operators
- Green’s-function Spherical Neural Operators are neural architectures that learn PDE solution operators on the sphere using parameterized Green’s functions in the spectral domain.
- They fuse equivariant, invariant, and anisotropic kernel designs to capture both global symmetries and local heterogeneity in spherical data.
- GSNOs achieve state-of-the-art performance across diverse applications, from climate modeling to molecular energy prediction, by leveraging explicit spectral transforms.
Green’s-function Spherical Neural Operators (GSNO) are a class of neural operator architectures that generalize the construction of solution operators for partial differential equations (PDEs) on the two-dimensional sphere. GSNOs formulate operator learning via integral representations associated with Green’s functions, fusing operator-theoretic insights with rigorous equivariant and invariant modeling in the spectral (harmonic) domain. They enable efficient and flexible neural surrogates for spherical physical, biological, and geometric systems that exhibit both global symmetries and local heterogeneity (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025, Melchers et al., 2024).
1. Green’s-Function Formulation on the Sphere
GSNOs are founded on the solution operator representation of (linear) PDEs on the unit sphere $S^2$, typically written as
$$\mathcal{L}u = f$$
for a chosen differential operator $\mathcal{L}$. The Green's function $G(x, y)$ is the fundamental solution of $\mathcal{L}$, satisfying $\mathcal{L}_x G(x, y) = \delta(x - y)$. The associated solution operator is the integral
$$u(x) = (\mathcal{G}f)(x) = \int_{S^2} G(x, y)\, f(y)\, \mathrm{d}\mu(y).$$
This formulation is central both to analytical operator design and to data-driven neural operator learning (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025, Melchers et al., 2024).
In the GSNO framework, the Green’s kernel is not prescribed a priori but is parameterized, often directly in the spherical harmonic domain, and learned end-to-end from data. This approach allows recovery of classical convolutional (equivariant), invariant, and anisotropic cases by modulating the learnable kernel classes.
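As a concrete classical instance of this diagonal spectral action (a minimal sketch, not taken from the cited papers): for the screened Poisson operator $\mathcal{L} = -\Delta_{S^2} + \kappa^2$, the Green's operator acts on spherical harmonic coefficients as $\hat{u}_{\ell m} = \hat{f}_{\ell m} / (\ell(\ell+1) + \kappa^2)$. The snippet applies this multiplier to a coefficient array; the triangular coefficient layout is an assumption for illustration.

```python
import numpy as np

def apply_greens_operator(f_hat: np.ndarray, kappa: float) -> np.ndarray:
    """Apply the Green's operator of L = -Laplacian + kappa^2 on S^2 in
    the harmonic domain. f_hat[l, m] stores the coefficient of Y_{l,m}
    for m = 0..l (triangular layout; remaining entries are zero)."""
    lmax = f_hat.shape[0] - 1
    ells = np.arange(lmax + 1)
    # Diagonal spectral multiplier: the harmonic-domain Green's kernel.
    g = 1.0 / (ells * (ells + 1) + kappa**2)
    return g[:, None] * f_hat  # broadcast over the order index m

# Band-limited example source with a single active mode (l=2, m=1).
f_hat = np.zeros((4, 4))
f_hat[2, 1] = 1.0
u_hat = apply_greens_operator(f_hat, kappa=0.5)  # = f_hat / 6.25
```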
2. Parametric Families of Spherical Green’s Functions
GSNOs leverage the spectral properties of the Laplace–Beltrami operator on $S^2$ and construct Green's functions with tunable equivariance and invariance properties. The principal canonical forms are:
- Equivariant Kernel: $G(x, y) = g(x \cdot y)$, which depends only on the spherical distance between $x$ and $y$, guaranteeing SO(3)-equivariance. In the harmonic domain, this reduces to diagonal scaling: $\hat{u}_{\ell m} = \hat{g}_\ell\, \hat{f}_{\ell m}$.
- Invariant Kernel: $G(x, y) = g_0$, constant over the sphere. The application yields an operator whose output is invariant to global rotations of the input,
$$(\mathcal{G}f)(x) = g_0\, \bar{f}, \qquad \bar{f} = \frac{1}{4\pi} \int_{S^2} f(y)\, \mathrm{d}\mu(y),$$
where $\bar{f}$ is the spherical mean of $f$.
- Anisotropic Kernel: $G(x, y) = g(x \cdot y,\, a \cdot x)$, with $a \in S^2$ a learned preferred direction. This enables axisymmetric but not globally equivariant modeling,
$$(\mathcal{G}f)(x) = \sum_{\ell} \hat{g}_\ell\, P_\ell(a \cdot x)\, f_\ell(x),$$
where $P_\ell$ are Legendre polynomials, $f_\ell$ is the degree-$\ell$ band of $f$, and $a$ is a learnable parameter (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025).
These three forms are fused in each GSNO network layer to permit adaptive representation of isotropy, global invariance (for nuisance elimination), and local anisotropy (for modeling preferred directions or boundaries).
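In the harmonic and spatial domains these three actions take a simple computational form. The numpy sketch below is illustrative only; the coefficient layout, normalizations, and the band-wise application of the anisotropic modulation are assumptions, not the papers' reference code.

```python
import numpy as np
from scipy.special import eval_legendre

def gsno_branches(f_hat, g_eq, g0, g_aniso, a, grid_xyz):
    """Spectral action of the three kernel families (illustrative shapes).
    f_hat[l, m] holds harmonic coefficients for m = 0..l.

    g_eq    : (L+1,)  per-degree multipliers, equivariant branch
    g0      : scalar  gain on the spherical mean, invariant branch
    g_aniso : (L+1,)  per-degree gains, anisotropic branch
    a       : (3,)    learned preferred direction (unit vector)
    grid_xyz: (N, 3)  unit vectors of the evaluation grid
    """
    lmax = f_hat.shape[0] - 1
    # Equivariant: diagonal scaling u_hat[l, m] = g_l * f_hat[l, m].
    u_eq_hat = g_eq[:, None] * f_hat
    # Invariant: only the l = 0 mode survives, giving a constant field.
    u_inv = g0 * f_hat[0, 0] * np.ones(len(grid_xyz))
    # Anisotropic: per-degree Legendre modulation P_l(a . x); each row l
    # multiplies the degree-l band of the field after the inverse SHT.
    cos_ax = grid_xyz @ a                                     # (N,)
    mod = np.stack([g_aniso[l] * eval_legendre(l, cos_ax)
                    for l in range(lmax + 1)])                # (L+1, N)
    return u_eq_hat, u_inv, mod
```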
3. Spectral Parameterization and Neural Operator Construction
GSNOs operate primarily in the spectral domain, parameterizing the learnable Green's kernel in terms of spherical harmonic coefficients:
$$G(x, y) = \sum_{\ell=0}^{L} \hat{g}_\ell \sum_{m=-\ell}^{\ell} Y_{\ell m}(x)\, \overline{Y_{\ell m}(y)}.$$
In the fused architecture, each GSNO layer executes:
- SHT: Transform the input $f$ to harmonic coefficients $\hat{f}_{\ell m}$.
- Coefficient Computation:
  - Equivariant: diagonal scaling $\hat{g}_\ell\, \hat{f}_{\ell m}$.
  - Invariant: the $\ell = 0$ mode, modulated by the input mean.
  - Anisotropic: multiplicative Legendre modulation per degree.
- Inverse SHT: Recover the processed spatial fields $u(x)$.
- Fusion: Concatenate the spectral outputs and process them through a pointwise linear layer and nonlinearity.
Parameter learning is performed per layer for the per-degree multipliers $\hat{g}_\ell$, the invariant gain $g_0$, and the preferred direction $a$ (for anisotropy). Spectral learning proceeds by backpropagation through the SHT/ISHT, with a mean-relative-error or grid-weighted objective (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025).
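The following PyTorch sketch illustrates this fused layer structure; it is a minimal illustration under stated assumptions, not the authors' reference code. The `sht`/`isht` callables, the coefficient layout, and the reduction of the anisotropic branch to its first Legendre term are all simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GSNOLayer(nn.Module):
    """Minimal sketch of a fused GSNO layer. `sht`/`isht` are assumed
    callables mapping real (B, C, nlat, nlon) fields to/from complex
    (B, C, L+1, L+1) harmonic coefficients."""

    def __init__(self, channels: int, lmax: int, sht, isht):
        super().__init__()
        self.sht, self.isht = sht, isht
        # Per-degree multipliers g_l for the equivariant branch.
        self.g_eq = nn.Parameter(torch.ones(channels, lmax + 1))
        # Gain on the spherical mean for the invariant branch.
        self.g0 = nn.Parameter(torch.ones(channels))
        # Learnable preferred direction a for the anisotropic branch.
        self.a = nn.Parameter(torch.randn(3))
        # Pointwise fusion of the three concatenated branches.
        self.mix = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x, grid_xyz):
        # grid_xyz: (nlat, nlon, 3) unit vectors of the spatial grid.
        f_hat = self.sht(x)
        # Equivariant branch: diagonal scaling per degree l.
        u_eq = self.isht(self.g_eq[None, :, :, None] * f_hat)
        # Invariant branch: broadcast the scaled l = 0 mode over the grid.
        mean = f_hat[..., 0, 0].real[..., None, None]
        u_inv = (self.g0[None, :, None, None] * mean).expand_as(x)
        # Anisotropic branch, reduced to the first Legendre term for
        # brevity: pointwise modulation by P_1(a . x) = a . x.
        a = F.normalize(self.a, dim=0)
        cos_ax = torch.einsum("ijk,k->ij", grid_xyz, a)
        u_aniso = u_eq * cos_ax[None, None]
        fused = torch.cat([u_eq, u_inv, u_aniso], dim=1)
        return F.gelu(self.mix(fused))
```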
4. Multi-Scale Architectures and Implementation
Advanced GSNO implementations, such as GSHNet, organize blocks within an encoder–decoder U-Net topology supporting multi-scale feature propagation:
- Downsampling blocks: SHT, spectral GSNO, ISHT, spatial grid reduction, channel doubling.
- Upsampling blocks: Symmetrically invert downsampling.
- Skip connections: Both long (input-output) and residual (across scales).
Each GSNO block handles the spectral transforms, applies the Green's-function operator (fusing the equivariant, invariant, and anisotropic terms), and performs channel mixing (via pointwise convolutions and GELU activation). The arrangement preserves the efficiency of diagonal spectral operators and minimizes distortion, as all transforms respect the spherical geometry (Tang et al., 11 Dec 2025).
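A minimal composition of one downsampling block, reusing the hypothetical `GSNOLayer` from the sketch above; the pooling-based grid reduction is a coarse stand-in for a geometry-respecting resampling.

```python
import torch.nn as nn

class DownBlock(nn.Module):
    """Illustrative encoder block: SHT -> spectral GSNO -> ISHT, then
    spatial grid reduction and channel doubling (names hypothetical)."""

    def __init__(self, channels: int, lmax: int, sht, isht):
        super().__init__()
        self.gsno = GSNOLayer(channels, lmax, sht, isht)  # sketch above
        self.pool = nn.AvgPool2d(2)   # coarse stand-in for grid reduction
        self.expand = nn.Conv2d(channels, 2 * channels, kernel_size=1)

    def forward(self, x, grid_xyz):
        skip = self.gsno(x, grid_xyz)              # spectral processing
        return self.expand(self.pool(skip)), skip  # skip feeds the decoder
```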
Spectral truncation at a maximum degree matched to the grid resolution commonly yields sufficient accuracy. Libraries implementing Driscoll–Healy sampling or latitude–longitude quadratures are used for the harmonic transforms (Tang et al., 7 Jan 2026).
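As one concrete option (an assumption; the cited papers do not name a specific library), NVIDIA's torch_harmonics provides differentiable SHT/ISHT pairs on equiangular grids. The API below reflects its commonly documented usage and should be verified against the installed version.

```python
import torch
import torch_harmonics as th

nlat, nlon = 64, 128
sht = th.RealSHT(nlat, nlon, grid="equiangular")
isht = th.InverseRealSHT(nlat, nlon, grid="equiangular")

x = torch.randn(2, 4, nlat, nlon)   # (batch, channels, lat, lon)
coeffs = sht(x)                      # complex harmonic coefficients
x_rec = isht(coeffs)                 # back to the spatial grid
```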
5. Training Regimes, Losses, and Regularization
Training employs the Adam optimizer with task-typical learning rates, batch sizes of 4–32, and moderate epoch budgets (50–150 per task). Losses include:
- Mean relative error (MRE), grid-weighted by spherical quadrature measures (see the sketch after this list),
- Spectral weight decay (on high-degree coefficients $\hat{g}_\ell$) for stability and regularization,
- Dropout in channel-mixing layers to control overfitting (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025).
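A minimal sketch of the grid-weighted MRE objective, assuming an equiangular latitude–longitude grid with approximate $\sin\theta$ area weights (the papers may use exact Driscoll–Healy or Gauss–Legendre quadrature weights instead):

```python
import torch

def grid_weighted_mre(pred, target, nlat: int):
    """Mean relative error weighted by the spherical quadrature measure.
    pred/target: (B, C, nlat, nlon); sin(theta) latitude weights
    approximate the area element of an equiangular grid."""
    theta = (torch.arange(nlat, dtype=pred.dtype) + 0.5) * torch.pi / nlat
    w = torch.sin(theta)[None, None, :, None]        # (1, 1, nlat, 1)
    num = torch.sqrt(((pred - target) ** 2 * w).sum(dim=(-2, -1)))
    den = torch.sqrt((target ** 2 * w).sum(dim=(-2, -1)))
    return (num / den).mean()
```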
Ablation experiments demonstrate that both the equivariant term and the invariant/anisotropic correction terms are essential for state-of-the-art performance; removing either degrades the metrics (for example, ACC for global weather prediction drops markedly when the equivariant term is omitted).
6. Empirical Performance and Scientific Applications
GSNOs and their hierarchically stacked variants demonstrate consistent improvements over baseline models, including s2cnn (SO(3) CNNs), SFNO (Spherical Fourier Neural Operators), and MLPs, across a spectrum of tasks:
| Task (metric) | GSNO | Closest baseline | Next baseline |
|---|---|---|---|
| Spherical MNIST, accuracy % (256 ch) | 99.3–99.7 | SFNO (98.9–99.1) | s2cnn (95.1–95.8) |
| Spherical shallow water equations, MRE×10⁻³ at 15 h | 0.67–0.78 | SFNO (0.86–1.04) | FNO (1.63–4.47) |
| Diffusion MRI FOD, WM ACC | 0.9176 | ESCNN (0.9006) | FOD-Net (0.8858) |
| Cortical parcellation, Mindboggle ACC (c128) | 90.42 | SPHARM-Net (89.88) | Spherical U-Net (89.42) |
| Molecular energy prediction, QM7 RMSE | 3.51 | s2cnn (8.47) | MLP (16.06) |
GSNO shows particular strengths in modeling heterogeneous spherical systems, including planetary climate fields, diffusion MRI fiber distributions, cortical surface parcellation, and quantum chemistry on atomistic shells (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025).
7. Theoretical Connections and Explicit Operator Structure
GSNOs are operator-theoretically underpinned by the explicit low-rank kernel representation
$$G(x, y) = \sum_{n, m} A_{nm}\, Y_n(x)\, Y_m(y),$$
where $Y_n$ are spherical harmonics (with the double index $(\ell, m)$ flattened into $n$), the coefficient matrix $A$ is learned by a system-network (often a spherical U-Net or graph CNN), and its inputs are input-averaged coefficients, i.e., weighted integrals of the PDE data. This formulation enables explicit Green's-kernel evaluation, tractable preconditioning of PDE solvers (via SVD/Cholesky factorization of the matrix $A$), and fine-scale quadrature refinement without network retraining (Melchers et al., 2024).
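A minimal numpy sketch of what explicit kernel evaluation permits under this low-rank form (all symbols and shapes here are illustrative assumptions): assemble $G$ on a quadrature grid from a coefficient matrix and factorize it, e.g., to build a preconditioner.

```python
import numpy as np
from scipy.special import sph_harm

def evaluate_kernel(A, pts_a, pts_b, lmax):
    """Evaluate G(x, y) = sum_{n,m} A[n, m] Y_n(x) Y_m(y) on two point
    sets given as (polar, azimuth) angle pairs; the double index (l, m)
    is flattened into n."""
    def basis(pts):
        cols = []
        for l in range(lmax + 1):
            for m in range(-l, l + 1):
                # scipy orders arguments as sph_harm(m, l, azimuth, polar);
                # the real part serves as an illustrative real basis.
                cols.append(sph_harm(m, l, pts[:, 1], pts[:, 0]).real)
        return np.stack(cols, axis=1)            # (N, (lmax + 1)^2)
    return basis(pts_a) @ A @ basis(pts_b).T     # (Na, Nb) kernel matrix

# Assemble the kernel on one quadrature grid and factorize it; here A
# stands in for network-predicted coefficients.
lmax = 3
A = np.eye((lmax + 1) ** 2)
pts = np.random.rand(50, 2) * np.array([np.pi, 2 * np.pi])
K = evaluate_kernel(A, pts, pts, lmax)
U, s, Vt = np.linalg.svd(K)   # SVD-based preconditioner construction
```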
A plausible implication is that GSNOs bridge the gap between purely analytical operator theory and the expressivity/flexibility of deep neural surrogates by working directly in the harmonic domain, enabling efficient adaptation to anisotropic and biologically/physically complex spherical data (Tang et al., 11 Dec 2025).
8. Distinctions from Related Architectures
GSNOs fundamentally differ from "Neural FMM" and local hierarchical neural operators in several aspects:
- GSNOs use explicit harmonic expansions and maintain closed-form spectral-domain kernel representations, whereas Neural FMM architectures (Fognini et al., 24 Sep 2025) avoid any analytic or spectral basis, instead learning all translation/aggregation rules via latent MLPs organized along an FMM-style tree.
- GSNOs can realize strict SO(3) equivariance, invariant averaging, and learnable anisotropic preferences in a single fused operator, while tree-structured architectures encode spatial hierarchies in lattice or octree partitions without sphere-native spectral structure.
These distinctions underscore the theoretical and practical advantages of GSNOs for applications demanding explicit control over symmetry, invariance, and computational efficiency on the sphere.
In summary, Green’s-function Spherical Neural Operators provide a unified, efficient, and highly flexible spectral approach to learning PDE solution operators and related mappings on spherical domains. By parameterizing, learning, and fusing Green’s kernels of varying symmetry classes in the spherical harmonic domain, they achieve state-of-the-art results across a wide range of scientific, medical, and physical modeling tasks (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025, Melchers et al., 2024).