JacobiConv: Spectral GNN with Jacobi Polynomials
- JacobiConv is a spectral GNN architecture that uses orthogonal Jacobi polynomial bases to achieve universal expressive power and fast convergence.
- It adapts the filter representation to the empirical graph Laplacian spectrum via parameters α and β, improving Hessian conditioning and optimization stability.
- The name also designates fast Chebyshev–Jacobi transforms realized through stabilized recurrences and asymptotic formulas, ensuring numerically stable and efficient evaluation across the full spectrum.
JacobiConv is a spectral graph neural network (GNN) architecture that leverages Jacobi polynomial bases to parameterize graph filters. Originally proposed in the context of analyzing the expressive power and optimization landscape of spectral GNNs, JacobiConv abandons pointwise nonlinearities and instead focuses on orthogonal polynomial parameterizations, yielding models with provably universal expressive power under mild conditions and superior empirical performance on both synthetic and real-world datasets (Wang et al., 2022). The Jacobi polynomial basis is chosen for its ability to be adapted to the empirical distribution of graph Laplacian eigenvalues via parameters α and β, enabling accelerated convergence through improved Hessian conditioning during training. Separately, JacobiConv also refers to fast transforms between Chebyshev and Jacobi polynomials, realized through Hahn’s asymptotic formula and stabilized recurrences for numerical stability across the spectrum (Slevinsky, 2016).
1. Spectral GNNs and the Motivation for JacobiConv
In a spectral GNN framework, each node feature matrix $X \in \mathbb{R}^{n \times d}$ is interpreted as a graph signal to be filtered in the eigenspace of the graph's normalized Laplacian $\hat{L} = I - \hat{A}$, where $\hat{A} = D^{-1/2} A D^{-1/2}$ and $\hat{L} = U \Lambda U^{\top}$. The action of a filter $g$ on $X$ is formulated as $Z = U\, g(\Lambda)\, U^{\top} X$, where $U$ is the eigenvector matrix and $\Lambda$ the diagonal matrix of eigenvalues ($\lambda_i \in [0, 2]$). To circumvent runtime eigendecomposition, $g$ is typically chosen to be a polynomial in $\lambda$, classically $g(\lambda) = \sum_{k=0}^{K} \theta_k \lambda^{k}$ so that $U\, g(\Lambda)\, U^{\top} X = g(\hat{L})\, X$, and polynomial bases such as Chebyshev or Bernstein have been standard.
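As a toy illustration (the graph, features, and filter below are assumptions for demonstration, not taken from the paper), the following NumPy snippet checks that filtering in the eigenbasis coincides with applying the matrix polynomial $g(\hat{L})$ directly:

```python
import numpy as np

# Toy 3-node path graph; none of these numbers come from the paper.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
d = A.sum(axis=1)
L_hat = np.eye(3) - A / np.sqrt(np.outer(d, d))     # normalized Laplacian I - D^{-1/2} A D^{-1/2}
lam, U = np.linalg.eigh(L_hat)

X = np.random.default_rng(0).normal(size=(3, 2))    # node features (graph signal)
g = lambda s: 1.0 - 0.5 * s + 0.25 * s ** 2         # an arbitrary degree-2 polynomial filter

Z_spectral = U @ np.diag(g(lam)) @ U.T @ X                        # filter in the eigenspace
Z_poly = (np.eye(3) - 0.5 * L_hat + 0.25 * L_hat @ L_hat) @ X     # same filter applied as g(L_hat)
assert np.allclose(Z_spectral, Z_poly)
```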
The choice of polynomial basis affects optimization: if $g$ is represented in a basis orthogonal with respect to the empirical spectral density of the graph signal, the Hessian of the squared loss with respect to the filter coefficients is (nearly) diagonal, leading to better-conditioned gradients and faster convergence. Jacobi polynomials, parameterized by $\alpha$ and $\beta$, provide a flexible family of orthogonal polynomials adjustable to the distribution of Laplacian eigenvalues encountered in real graphs (Wang et al., 2022).
2. Jacobi Polynomial Basis: Definitions and Properties
For $\alpha, \beta > -1$, Jacobi polynomials $P_k^{(\alpha,\beta)}(z)$ are defined on $[-1, 1]$ via the three-term recurrence
$$
P_k^{(\alpha,\beta)}(z) = \big(\theta_k\, z + \theta_k'\big)\, P_{k-1}^{(\alpha,\beta)}(z) - \theta_k''\, P_{k-2}^{(\alpha,\beta)}(z), \qquad P_0^{(\alpha,\beta)}(z) = 1, \quad P_1^{(\alpha,\beta)}(z) = \tfrac{\alpha-\beta}{2} + \tfrac{\alpha+\beta+2}{2}\, z,
$$
with explicit coefficients $\theta_k, \theta_k', \theta_k''$ depending on $k$, $\alpha$, and $\beta$ (see He et al. 2021; also Slevinsky, 2016). Jacobi polynomials are orthogonal with respect to the weight $w^{(\alpha,\beta)}(z) = (1-z)^{\alpha}(1+z)^{\beta}$ on $[-1, 1]$:
$$
\int_{-1}^{1} P_j^{(\alpha,\beta)}(z)\, P_k^{(\alpha,\beta)}(z)\, (1-z)^{\alpha}(1+z)^{\beta}\, dz = 0 \quad \text{for } j \neq k.
$$
In spectral GNNs, the normalized Laplacian spectrum in $[0, 2]$ is shifted to $[-1, 1]$ via the propagation matrix $\hat{A} = I - \hat{L}$, and the Jacobi polynomial basis $\{P_k^{(\alpha,\beta)}\}$ is used as the functional basis for the filter $g$.
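As a sketch (the helper name and dense NumPy arrays are illustrative assumptions; practical implementations typically use sparse propagation), the recurrence lets one compute the basis signals $P_k^{(\alpha,\beta)}(\hat{A})\,X$ using only matrix products with $\hat{A}$:

```python
import numpy as np

def jacobi_basis_signals(A_hat, X, K, alpha=1.0, beta=1.0):
    """Compute P_k^{(alpha,beta)}(A_hat) @ X for k = 0..K via the standard
    three-term recurrence (assumes non-degenerate alpha, beta > -1)."""
    bases = [X.copy()]                                            # P_0(A_hat) X = X
    if K >= 1:
        bases.append((alpha - beta) / 2 * X
                     + (alpha + beta + 2) / 2 * (A_hat @ X))      # P_1(A_hat) X
    for k in range(2, K + 1):
        c = 2 * k + alpha + beta
        a1 = 2 * k * (k + alpha + beta) * (c - 2)
        a2 = (c - 1) * (alpha ** 2 - beta ** 2)
        a3 = c * (c - 1) * (c - 2)
        a4 = 2 * (k + alpha - 1) * (k + beta - 1) * c
        bases.append((a2 * bases[-1] + a3 * (A_hat @ bases[-1])
                      - a4 * bases[-2]) / a1)
    return bases                                                  # K + 1 arrays shaped like X
```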
3. JacobiConv Architecture and Spectral Filter Parameterization
JacobiConv parameterizes the spectral filter as a $K$-th-order Jacobi polynomial expansion
$$
Z_{:,l} \;=\; \sum_{k=0}^{K} \gamma_{k,l}\, P_k^{(\alpha,\beta)}(\hat{A})\, (XW)_{:,l},
$$
where the $\gamma_{k,l}$ are learned coefficients and each output channel $l$ may use its own set of coefficients. In the forward computation, the input $X$ is linearly projected by $W$ to $XW$, followed by spectral filtering to produce $Z$.
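A minimal sketch of this forward pass (reusing the `jacobi_basis_signals` helper sketched above; shapes and names are illustrative assumptions rather than the reference implementation):

```python
import numpy as np

def jacobiconv_forward(A_hat, X, W, gamma, alpha=1.0, beta=1.0):
    """Illustrative forward pass: project features with W, then mix the Jacobi
    basis signals with per-order, per-channel coefficients gamma[k, l].

    A_hat : (n, n) propagation matrix I - L_hat
    X     : (n, d_in) node features
    W     : (d_in, d_out) linear projection
    gamma : (K + 1, d_out) learned filter coefficients
    """
    H = X @ W                                                    # XW
    bases = jacobi_basis_signals(A_hat, H, K=gamma.shape[0] - 1,
                                 alpha=alpha, beta=beta)         # P_k(A_hat) XW
    Z = np.zeros_like(H)
    for k, Pk_H in enumerate(bases):
        Z += gamma[k][None, :] * Pk_H                            # channel-wise coefficients
    return Z
```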
Filter learning is cast as minimizing a Frobenius-norm loss with weight decay on both the projection $W$ and the filter coefficients. Optimization is performed via Adam, with grid or random search over learning rates and the polynomial order $K$.
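Schematically (the notation here is assumed rather than quoted from the paper), the regression variant of this objective can be written as
$$
\min_{W,\;\gamma}\; \Big\| \sum_{k=0}^{K} \gamma_{k}\, P_k^{(\alpha,\beta)}(\hat{A})\, X W \;-\; Y \Big\|_F^{2} \;+\; \lambda_W \|W\|_F^{2} \;+\; \lambda_\gamma \|\gamma\|_2^{2},
$$
with $(\alpha, \beta)$, the learning rates, and the order $K$ selected by the search described above.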
To further stabilize coefficient learning, JacobiConv employs Polynomial Coefficient Decomposition (PCD), which factorizes each coefficient $\gamma_{k}$ into a product of per-order factors whose magnitudes are kept bounded by a squashing nonlinearity, preventing high-order terms from dominating the expansion.
4. Orthogonality, Hessian Conditioning, and Adaptivity
Near a loss-minimizing solution, the Hessian with respect to the filter coefficients is determined by the inner products
$$
H_{jk} \;\propto\; \int P_j^{(\alpha,\beta)}(\mu)\, P_k^{(\alpha,\beta)}(\mu)\; dF(\mu),
$$
where $F$ is the empirical spectral density of $\hat{A}$ weighted by the squared graph Fourier coefficients of the projected signal. Orthogonalizing the basis with respect to this measure minimizes the Hessian's condition number. The Jacobi polynomials' orthogonality with respect to $(1-z)^{\alpha}(1+z)^{\beta}$ provides the flexibility to fit the observed empirical density by grid-searching over $(\alpha, \beta)$.
This adaptivity is absent from fixed bases such as Chebyshev or Bernstein; it allows JacobiConv to maintain fast convergence and stable optimization over the diverse spectral densities encountered in practice (Wang et al., 2022).
5. Expressive Power and Universality without Nonlinearities
Theorem 4.1 of (Wang et al., 2022) shows that a purely linear spectral GNN of the form $Z = g(\hat{L})\, X W$, with $g$ a polynomial filter, can realize any one-dimensional node prediction provided:
- $\hat{L}$ has distinct eigenvalues (no repeated eigenvalues).
- $X$ has no missing frequency components (a nonzero projection onto every eigen-direction).
Consequently, adding nonlinearities does not increase expressive power for general graphs with suitable features. Nonlinearities can assist in degenerate edge cases (repeated eigenvalues, missing frequencies), but these are empirically rare. Universality extends to multi-dimensional outputs by assigning each output channel an independent filter. Further, polynomial-filter GNNs have at most the discriminative power of $K$-step 1-Weisfeiler–Leman (1-WL), which in the absence of eigenvalue multiplicity and frequency gaps already achieves full node distinction.
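The following NumPy sketch illustrates the argument on a toy path graph (all specifics here are assumptions for demonstration): with distinct eigenvalues and no missing frequencies, a degree-$(n-1)$ polynomial filter alone maps a given signal to an arbitrary target, with no nonlinearity involved.

```python
import numpy as np

n = 5
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)   # path graph P_5
d = A.sum(axis=1)
L = np.eye(n) - A / np.sqrt(np.outer(d, d))                    # normalized Laplacian
lam, U = np.linalg.eigh(L)                                     # distinct eigenvalues for a path

rng = np.random.default_rng(0)
x = rng.normal(size=n)                  # generic signal: no missing frequency components
z = rng.normal(size=n)                  # arbitrary target

g_vals = (U.T @ z) / (U.T @ x)          # required filter response at each eigenvalue
coeffs = np.linalg.solve(np.vander(lam, increasing=True), g_vals)   # degree n-1 polynomial

g_L = sum(c * np.linalg.matrix_power(L, k) for k, c in enumerate(coeffs))
assert np.allclose(g_L @ x, z)          # the linear spectral filter hits the target exactly
```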
6. Empirical Results and Benchmark Comparisons
JacobiConv's empirical validation covers both synthetic filtering tasks and real-world node classification:
| Task/Domain | Baselines | JacobiConv outcome |
|---|---|---|
| Synthetic filtering ("image-on-graph") | GPRGNN, ARMA, ChebyNet, BernNet, monomial/Chebyshev/Bernstein/Jacobi linear GNNs | Up to 50× lower MSE (e.g., versus $1.8$ MSE for ARMA on the low-pass filter); outperforms all linear baselines (roughly 10× lower MSE than the monomial and Bernstein bases) |
| Real-world node classification (10 datasets) | GCN, APPNP, ChebyNet, GPRGNN, BernNet (with nonlinearities) | Wins 9/10 datasets; up to +12% accuracy gain (e.g., Squirrel); 2–3 points better than BernNet on average, while using fewer parameters |
This demonstrates that JacobiConv delivers universal spectral filtering capability and state-of-the-art empirical performance using only linear operations and no activations (Wang et al., 2022).
7. Fast Chebyshev–Jacobi Transforms and Numerical Implementation
In the context of polynomial basis transforms, JacobiConv also refers to the fast, numerically stable computation of Chebyshev–Jacobi transforms (Slevinsky, 2016). This is realized using:
- Hahn’s interior asymptotic formula for $P_n^{(\alpha,\beta)}(\cos\theta)$ with rigorous error bounds, allowing reduction to a sum of diagonally scaled DCT-I and DST-I transforms on the "asymptotic blocks".
- A stable three-term recurrence and the Clenshaw–Smith algorithm, with Reinsch's endpoint modifications to maintain uniform accuracy even near $\cos\theta = \pm 1$ (a naive reference computation is sketched after this list).
- Quasi-linear overall complexity for the full transform, using a fixed number of asymptotic terms sufficient for double precision and careful block partitioning of the computation domain.
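As a point of reference only (this is not Slevinsky's algorithm; the helper name, SciPy/NumPy routines, and the brute-force strategy are assumptions for illustration), a direct $\mathcal{O}(N^2)$ Jacobi-to-Chebyshev conversion can be written as follows; the fast transform performs the same map at quasi-linear cost:

```python
import numpy as np
from scipy.special import eval_jacobi   # evaluates P_k^{(alpha,beta)} pointwise

def jacobi_to_chebyshev_naive(c_jac, alpha, beta):
    """Brute-force reference: evaluate the Jacobi series at Chebyshev points and
    re-expand the resulting degree-N polynomial in the Chebyshev basis.  Cost is
    O(N^2); the fast Chebyshev-Jacobi transform achieves the same mapping in
    quasi-linear time via Hahn's asymptotic formula and DCT/DST blocks."""
    N = len(c_jac) - 1
    f = lambda x: sum(c * eval_jacobi(k, alpha, beta, x) for k, c in enumerate(c_jac))
    # Interpolating a degree-N polynomial at N+1 Chebyshev points of the first
    # kind recovers its Chebyshev coefficients exactly (up to rounding).
    return np.polynomial.chebyshev.chebinterpolate(f, N)

# Example usage: convert a short Jacobi^{(0.25, -0.5)} expansion.
# c_cheb = jacobi_to_chebyshev_naive([1.0, 0.5, -0.25], alpha=0.25, beta=-0.5)
```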
Key implementation details include:
- Pre-planning DCTs/DSTs via FFTW or equivalent libraries.
- Explicitly handling edge cases and parameter regimes for $\alpha$ and $\beta$, including parameter shifting for half-integer cases.
- Clenshaw–Curtis quadrature and accurate endpoint handling (Slevinsky, 2016).
This fast transform is essential for efficient evaluation and inversion of Jacobi polynomial expansions in GNN filtering and beyond.
In conclusion, JacobiConv unifies advances in spectral graph filtering, polynomial basis adaptivity, and fast, stable transforms to provide a highly expressive, readily optimizable, and empirically strong spectral GNN framework without the need for nonlinearities or heavy overparameterization (Wang et al., 2022; Slevinsky, 2016).