Green’s-function Spherical Neural Operators

Updated 22 February 2026
  • Green’s-function Spherical Neural Operators are neural architectures that learn PDE solution operators on the sphere using parameterized Green’s functions in the spectral domain.
  • They fuse equivariant, invariant, and anisotropic kernel designs to capture both global symmetries and local heterogeneity in spherical data.
  • GSNOs achieve state-of-the-art performance across diverse applications, from climate modeling to molecular energy prediction, by leveraging explicit spectral transforms.

Green’s-function Spherical Neural Operators (GSNO) are a class of neural operator architectures that generalize the construction of solution operators for partial differential equations (PDEs) on the two-dimensional sphere. GSNOs formulate operator learning via integral representations associated with Green’s functions, fusing operator-theoretic insights with rigorous equivariant and invariant modeling in the spectral (harmonic) domain. They enable efficient and flexible neural surrogates for spherical physical, biological, and geometric systems that exhibit both global symmetries and local heterogeneity (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025, Melchers et al., 2024).

1. Green’s-Function Formulation on the Sphere

GSNOs are founded on the solution-operator representation of (linear) PDEs on the unit sphere $S^2$, typically written as

$$D\,g(u) = f(u), \qquad u \in S^2$$

for a chosen differential operator $D$. The Green's function $G(u, u')$ is the fundamental solution of $D$, satisfying $D_u G(u, u') = \delta(u, u')$. The associated solution operator is the integral

$$g(u) = \int_{S^2} G(u, u')\, f(u')\, du'$$

This formulation is central to both designable operator theory and data-driven neural operator learning (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025, Melchers et al., 2024).

In the GSNO framework, the Green's kernel $G$ is not prescribed a priori but is parameterized, often directly in the spherical harmonic domain, and learned end-to-end from data. This approach allows recovery of classical convolutional (equivariant), invariant, and anisotropic cases by modulating the learnable kernel classes.
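As a concrete illustration of the integral representation acting diagonally in the harmonic domain, the sketch below applies the Green's operator of a screened Poisson problem $D = -\Delta_{S^2} + \kappa^2$, whose spectral action is $\hat{g}(\ell, m) = \hat{f}(\ell, m) / (\ell(\ell+1) + \kappa^2)$. The choice of operator and the helper name are illustrative assumptions, not taken from the cited papers:

```python
def greens_spectral_solve(f_coeffs, kappa=1.0):
    """Apply the Green's operator of D = -Lap + kappa^2 on S^2 diagonally in
    the spherical-harmonic domain: g_hat(l, m) = f_hat(l, m) / (l(l+1) + kappa^2).

    f_coeffs: dict mapping (l, m) -> complex coefficient of f.
    Returns the coefficients of g(u) = integral of G(u, u') f(u') du'.
    """
    return {(l, m): c / (l * (l + 1) + kappa ** 2)
            for (l, m), c in f_coeffs.items()}

# f = Y_{2,1}: a single harmonic, so g is that harmonic scaled by 1/(6 + kappa^2).
f_hat = {(2, 1): 1.0 + 0.0j}
g_hat = greens_spectral_solve(f_hat, kappa=1.0)
```

A GSNO generalizes exactly this diagonal action by replacing the fixed per-degree response $1/(\ell(\ell+1)+\kappa^2)$ with learnable coefficients.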

2. Parametric Families of Spherical Green’s Functions

GSNOs leverage the spectral properties of $S^2$ and construct Green's functions with tunable equivariance and invariance properties. The principal canonical forms are:

  • Equivariant Kernel: $G^E(u, u') = G^E(\operatorname{dist}(u, u'))$ depends only on the spherical distance, guaranteeing SO(3)-equivariance. In the harmonic domain, this reduces to a diagonal scaling:

$$\mathrm{SHT}[g^E](\ell, m) = G^E_\theta(\ell)\, \mathrm{SHT}[f](\ell, m)$$

  • Invariant Kernel: $G^I(u, u') = G^I(u)$. Applying it yields an operator whose output is invariant to global rotations of $f$:

$$\mathrm{SHT}[g^I](\ell, m) = C_f\, G^I_\theta(\ell, m)$$

where $C_f$ is the spherical mean of $f$.

  • Anisotropic Kernel: $G^A(u, u') = G^A(u \cdot d)$, with $d$ a learned preferred direction. This enables axisymmetric but not globally equivariant modeling:

$$\mathrm{SHT}[g^A](\ell, m) = \mathrm{SHT}[f](\ell, m)\, P_\ell(n \cdot d_\theta)$$

where the $P_\ell$ are Legendre polynomials and $d_\theta$ is a learnable parameter (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025).

These three forms are fused in each GSNO network layer to permit adaptive representation of isotropy, global invariance (for nuisance elimination), and local anisotropy (for modeling preferred directions or boundaries).
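The three spectral actions above can be sketched as standalone branches operating on a coefficient array of shape $(\ell_{\max}+1,\ 2\ell_{\max}+1)$, with $m$ indexed relative to the centre column. The storage layout, normalization of the spherical mean, and function names are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial.legendre import legval  # evaluates a Legendre series

L_MAX = 4  # truncation degree; coefficients stored as (L_MAX+1, 2*L_MAX+1)

def equivariant_branch(f_hat, g_ell):
    # g^E_hat(l, m) = G^E_theta(l) * f_hat(l, m): one gain per degree l, shared over m.
    return f_hat * g_ell[:, None]

def invariant_branch(f_hat, g_lm):
    # g^I_hat(l, m) = C_f * G^I_theta(l, m); C_f is taken proportional to the
    # (l=0, m=0) coefficient, i.e. the spherical mean of the input field.
    c_f = f_hat[0, L_MAX]
    return c_f * g_lm

def anisotropic_branch(f_hat, cos_nd):
    # g^A_hat(l, m) = f_hat(l, m) * P_l(n . d_theta): per-degree Legendre modulation.
    p_ell = np.array([legval(cos_nd, np.eye(L_MAX + 1)[l])
                      for l in range(L_MAX + 1)])
    return f_hat * p_ell[:, None]
```

Note that with unit gains the equivariant branch is the identity, and with $n \cdot d_\theta = 1$ the anisotropic branch is as well, since $P_\ell(1) = 1$ for all $\ell$.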

3. Spectral Parameterization and Neural Operator Construction

GSNOs operate primarily in the spectral domain, parameterizing the learnable Green’s kernel in terms of spherical harmonic coefficients:

$$G(u, u') = \sum_{\ell=0}^{L_{\max}} \sum_{m=-\ell}^{\ell} \alpha_{\ell m}\, Y_{\ell m}(u)\, \overline{Y_{\ell m}(u')}$$

In the fused architecture, each GSNO layer executes:

  1. SHT: Transform $f(u)$ to harmonic coefficients $\mathrm{SHT}[f](\ell, m)$.
  2. Coefficient Computation:
    • Equivariant: $G_1(\ell)$ scaling.
    • Invariant: $G_2(\ell, m)$ modulated by the input mean.
    • Anisotropic: Multiplicative Legendre modulation per degree.
  3. Inverse SHT: Recover the processed fields $g^E(u), g^I(u), g^A(u)$.
  4. Fusion: Concatenate the spectral outputs and process them through a pointwise linear layer and nonlinearity.

Parameter learning is performed for $G_1(\ell)$, $G_2(\ell, m)$, and $d_\theta$ (for anisotropy) per layer. Spectral learning proceeds by backpropagation through the SHT/ISHT, with a mean-relative-error or grid-weighted objective (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025).
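The four steps can be condensed into one fused layer acting directly on harmonic coefficients; the SHT/ISHT and the spatial-domain nonlinearity are omitted, and all shapes, channel counts, and parameter initializations below are illustrative assumptions rather than the papers' implementation:

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(0)
L_MAX, C_IN, C_OUT = 4, 2, 3  # truncation degree, input/output channels

# "Learnable" parameters, randomly initialised here: per-degree equivariant
# gains, a full (l, m) invariant template, a preferred-direction cosine,
# and the pointwise 1x1 fusion weights.
g1 = rng.standard_normal((C_IN, L_MAX + 1))                  # G_1(l) per channel
g2 = rng.standard_normal((C_IN, L_MAX + 1, 2 * L_MAX + 1))   # G_2(l, m)
cos_d = 0.5                                                  # n . d_theta
w_fuse = rng.standard_normal((C_OUT, 3 * C_IN))              # fusion weights

def gsno_layer(f_hat):
    """One fused GSNO layer on coefficients of shape (C_IN, L_MAX+1, 2*L_MAX+1)."""
    p_ell = np.array([legval(cos_d, np.eye(L_MAX + 1)[l])
                      for l in range(L_MAX + 1)])
    g_e = f_hat * g1[:, :, None]                   # equivariant: diagonal in l
    g_i = f_hat[:, 0, L_MAX][:, None, None] * g2   # invariant: mean-modulated
    g_a = f_hat * p_ell[None, :, None]             # anisotropic: Legendre gate
    stacked = np.concatenate([g_e, g_i, g_a], axis=0)  # channel concatenation
    fused = np.einsum('oc,clm->olm', w_fuse, stacked)  # pointwise linear mix
    # The GELU nonlinearity would follow the inverse SHT in the spatial domain.
    return fused
```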

4. Multi-Scale Architectures and Implementation

Advanced GSNO implementations, such as GSHNet, organize blocks within an encoder–decoder U-Net topology supporting multi-scale feature propagation:

  • Downsampling blocks: SHT, spectral GSNO, ISHT, spatial grid reduction, channel doubling.
  • Upsampling blocks: Symmetrically invert downsampling.
  • Skip connections: Both long (input-output) and residual (across scales).

Each GSNO block handles the spectral transforms, applies the Green's-function operator (fusing the equivariant, invariant, and anisotropic terms), and performs channel mixing via $1 \times 1$ convolutions and GELU activations. The arrangement maintains spectral diagonal efficiency and minimizes distortion, as all transforms respect spherical geometry (Tang et al., 11 Dec 2025).

Spectral truncation at $\ell_{\max} \sim 15$–$20$ commonly yields sufficient resolution (e.g., for $256 \times 256$ grids). Libraries implementing Driscoll–Healy sampling or latitude–longitude quadratures are used for the harmonic transforms (Tang et al., 7 Jan 2026).
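Spectral truncation itself is a one-line operation on a coefficient array; the sketch below assumes the same $(\ell, m)$ storage layout used earlier, which is a convention of this illustration rather than of any particular library:

```python
import numpy as np

def truncate_spectrum(coeffs, l_max):
    """Zero all harmonic coefficients above degree l_max.
    coeffs has shape (L+1, 2L+1), one row per degree l; entries with |m| > l
    are assumed already zero in this triangular storage convention."""
    out = coeffs.copy()
    out[l_max + 1:, :] = 0.0
    return out
```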

5. Training Regimes, Losses, and Regularization

Training employs the Adam optimizer (typical learning rate $2 \times 10^{-3}$), batch sizes of 4–32, and moderate epoch counts ($\sim$50–150 per task). Losses include:

  • Mean relative error (MRE), grid-weighted by spherical quadrature measures,
  • Spectral weight decay (on high $\ell$) for stability and regularization,
  • Dropout in channel-mixing layers to control overfitting (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025).
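A grid-weighted mean relative error of the kind listed above can be sketched as follows, using $\sin\theta$ quadrature weights on an equiangular latitude–longitude grid; the exact weighting scheme and normalization are assumptions for illustration:

```python
import numpy as np

def grid_weighted_mre(pred, target):
    """Relative L2 error weighted by the spherical quadrature measure sin(theta)
    on an equiangular latitude-longitude grid.
    pred, target: arrays of shape (n_lat, n_lon)."""
    n_lat = pred.shape[0]
    theta = (np.arange(n_lat) + 0.5) * np.pi / n_lat  # colatitudes, cell centres
    w = np.sin(theta)[:, None]                         # quadrature weights
    num = np.sqrt(np.sum(w * (pred - target) ** 2))
    den = np.sqrt(np.sum(w * target ** 2))
    return num / den
```

Without the $\sin\theta$ weighting, errors near the poles would be over-counted relative to their true surface area.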

Ablation experiments demonstrate that both equivariant and correction terms are essential for state-of-the-art performance; removing either degrades metrics (e.g., ACC drops from 71% to 20% for global weather prediction if the equivariant term is omitted).

6. Empirical Performance and Scientific Applications

GSNOs and their hierarchically stacked variants demonstrate consistent improvements over baseline models—including s2cnn (SO(3) CNNs), SFNO (Spherical Fourier Neural Operators), and MLPs—across a spectrum of tasks:

| Task (metric) | GSNO | Closest baseline | Next baseline |
| --- | --- | --- | --- |
| Spherical MNIST, accuracy % (256 ch) | 99.3–99.7 | SFNO (98.9–99.1) | s2cnn (95.1–95.8) |
| Spherical shallow water equations, MRE ×10⁻³ at 15 h | 0.67–0.78 | SFNO (0.86–1.04) | FNO (1.63–4.47) |
| Diffusion MRI FOD, WM ACC | 0.9176 | ESCNN (0.9006) | FOD-Net (0.8858) |
| Cortical parcellation, Mindboggle ACC (c128) | 90.42 | SPHARM-Net (89.88) | Spherical U-Net (89.42) |
| Molecular energy prediction, QM7 RMSE | 3.51 | s2cnn (8.47) | MLP (16.06) |

GSNO shows particular strengths in modeling heterogeneous spherical systems, including planetary climate fields, diffusion MRI fiber distributions, cortical surface parcellation, and quantum chemistry on atomistic shells (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025).

7. Theoretical Connections and Explicit Operator Structure

GSNOs are operator-theoretically underpinned by the explicit low-rank kernel representation:

$$K_\theta(\omega, \omega') = \sum_{i,j=1}^{N} Y_i(\omega)\, A_{ij}(\theta)\, Y_j(\omega')$$

where the $Y_j$ are spherical harmonics, $A$ is learned by a system network (often a spherical U-Net or graph CNN), and $d^f_i = \int_{S^2} Y_i(\Omega)\, f(\Omega)\, d\Omega$ are input-averaged coefficients. This formulation enables explicit Green's-kernel evaluation, tractable preconditioning of PDE solvers (via SVD/Cholesky of the $A$ matrix), and fine-scale quadrature refinement without network retraining (Melchers et al., 2024).
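Evaluating the low-rank kernel at a pair of points reduces to two basis evaluations and a bilinear form. The toy two-function basis and the identity stand-in for the learned matrix $A$ below are illustrative assumptions:

```python
import numpy as np

def eval_kernel(A, basis, u, v):
    """Evaluate K(u, v) = sum_ij Y_i(u) A_ij Y_j(v), where `basis` is a list
    of callables Y_i on unit 3-vectors and A is the learned coefficient matrix
    (here supplied directly instead of produced by a system network)."""
    y_u = np.array([Y(u) for Y in basis])
    y_v = np.array([Y(v) for Y in basis])
    return y_u @ A @ y_v

# Toy basis: the constant harmonic and the l = 1, m = 0 direction (prop. to z).
basis = [lambda u: 1.0, lambda u: u[2]]
A = np.eye(2)  # identity "learned" matrix => K(u, v) = 1 + z_u * z_v
north = np.array([0.0, 0.0, 1.0])
```

Because $K$ is available in closed form, factorizations of $A$ (SVD, Cholesky) translate directly into factorizations of the kernel, which is what makes the preconditioning and quadrature-refinement uses cited above tractable.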

A plausible implication is that GSNOs bridge the gap between purely analytical operator theory and the expressivity/flexibility of deep neural surrogates by working directly in the harmonic domain, enabling efficient adaptation to anisotropic and biologically/physically complex spherical data (Tang et al., 11 Dec 2025).

GSNOs fundamentally differ from "Neural FMM" and local hierarchical neural operators in several aspects:

  • GSNOs use explicit harmonic expansions and maintain closed-form spectral-domain kernel representations, whereas Neural FMM architectures (Fognini et al., 24 Sep 2025) avoid any analytic or spectral basis, instead learning all translation/aggregation rules via latent MLPs organized along an FMM-style tree.
  • GSNOs can realize strict SO(3) equivariance, invariant averaging, and learnable anisotropic preferences in a single fused operator, while tree-structured architectures encode spatial hierarchies in lattice or octree partitions without sphere-native spectral structure.

These distinctions underscore the theoretical and practical advantages of GSNOs for applications demanding explicit control over symmetry, invariance, and computational efficiency on the sphere.


In summary, Green’s-function Spherical Neural Operators provide a unified, efficient, and highly flexible spectral approach to learning PDE solution operators and related mappings on spherical domains. By parameterizing, learning, and fusing Green’s kernels of varying symmetry classes in the spherical harmonic domain, they achieve state-of-the-art results across a wide range of scientific, medical, and physical modeling tasks (Tang et al., 7 Jan 2026, Tang et al., 11 Dec 2025, Melchers et al., 2024).
