GraphArm: Rational Graph Filters & Neural Models
- GraphArm is a framework for rational graph filtering that combines autoregressive and moving average components to capture long-range, frequency-selective dependencies in graph signals.
- It enables efficient distributed implementation with localized recursions, ensuring stability and transferability across both static and dynamic graph structures.
- By integrating adaptive neural architectures, GraphArm enhances performance in node classification, signal denoising, and other graph tasks while maintaining low spectral-response error.
Graph Autoregressive Moving Average (GraphARMA or, editor's term, GraphArm) filters are a family of rational graph filters and graph neural architectures that generalize classical ARMA filtering to signals defined on graphs. GraphArm incorporates both autoregressive (AR) and moving-average (MA) components to model long-range, frequency-selective, and temporally evolving dependencies among graph-structured data. This methodology spans distributed signal processing on static and dynamic graphs (Isufi et al., 2016), graph neural network design (Bianchi et al., 2019), and adaptive attention-driven state space models (Eliasof et al., 22 Jan 2025). GraphArm bridges rational graph spectral filtering, distributed recursion, and expressive neural parameterization, with stability and transferability properties that make it central in modern graph learning.
1. Mathematical Definition and Spectral Formulation
The GraphARMA filter of orders $(p, q)$ is defined in analogy to classical ARMA filters, as a rational function on the spectrum of a graph Laplacian $L$. The general form is
$$h(\lambda) = \frac{\sum_{k=0}^{q} b_k\,\lambda^{k}}{1 + \sum_{k=1}^{p} a_k\,\lambda^{k}},$$
where $\lambda$ denotes an eigenvalue of $L$, and $b_k$ (numerator) and $a_k$ (denominator, often normalized so $a_0 = 1$) are, respectively, the MA and AR coefficients (Isufi et al., 2016, Bianchi et al., 2019, Eliasof et al., 22 Jan 2025).
Applying $h(L)$ in the graph-Fourier domain, for a signal $x$ (with $\hat{x} = U^\top x$, where $L = U \Lambda U^\top$), gives $\hat{y}_i = h(\lambda_i)\,\hat{x}_i$. Polynomial choices (all $a_k = 0$) recover finite impulse response (FIR) or Chebyshev/GNN convolutional filters; general $(p, q)$ yield rational spectral responses, enabling sharper or non-low-pass filtering.
GraphArm traditionally sets the filter coefficients independently of any particular graph, so that $h(\lambda)$ is valid for all graphs with Laplacian spectrum in $[0, \lambda_{\max}]$, supporting robustness and transferability (Isufi et al., 2016, Bianchi et al., 2019).
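The rational spectral response above can be verified directly in the graph-Fourier domain. Below is a minimal NumPy sketch (the function name `arma_spectral_filter` and the coefficient layout are illustrative choices, not from the cited papers); the dense eigendecomposition is for exposition only and does not scale to large graphs.

```python
import numpy as np

def arma_spectral_filter(L, x, b, a):
    """Apply the rational response h(lam) = (sum_k b[k] lam^k) /
    (1 + sum_k a[k] lam^(k+1)) to a graph signal x.
    Dense eigendecomposition, O(n^3): illustration only."""
    lam, U = np.linalg.eigh(L)                   # graph Fourier basis
    num = np.polyval(b[::-1], lam)               # MA polynomial in lambda
    den = 1.0 + lam * np.polyval(a[::-1], lam)   # AR polynomial, a_0 = 1
    return U @ ((num / den) * (U.T @ x))         # filter in spectral domain

# path-graph Laplacian; ARMA(1, 0) low-pass response h(lam) = 1/(1 + lam)
A = np.diag(np.ones(4), 1); A = A + A.T
L = np.diag(A.sum(axis=1)) - A
x = np.random.default_rng(0).standard_normal(5)
y = arma_spectral_filter(L, x, b=[1.0], a=[1.0])   # equals (I + L)^{-1} x
```

With `b=[1.0]`, `a=[1.0]` the response is $1/(1+\lambda)$, so the output coincides with the linear solve $(I + L)^{-1} x$, which is a quick correctness check.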
2. Distributed Implementation and Coefficient Design
GraphArm admits efficient vertex-domain realization as distributed, local recursions. Core update types include:
- ARMA$_1$ recursion ("potential-kernel" block):
$$y^{(k+1)} = \psi\, M y^{(k)} + \varphi\, x$$
for scalar coefficients $\psi, \varphi$ and a shifted graph matrix $M$ (e.g., $M = \tfrac{\lambda_{\max}}{2} I - L$). The steady-state frequency response becomes $h(\mu) = \frac{\varphi}{1 - \psi\mu}$ for $|\psi| < 1/\rho(M)$, with $\mu$ an eigenvalue of $M$ and $\rho(M)$ its spectral radius.
- Parallel-ARMA$_K$: $K$ independent ARMA$_1$ recursions run in parallel, summed:
$$y = \sum_{k=1}^{K} y_k, \qquad y_k^{(t+1)} = \psi_k\, M y_k^{(t)} + \varphi_k\, x,$$
leading to $h(\mu) = \sum_{k=1}^{K} \frac{\varphi_k}{1 - \psi_k \mu}$, an order-$(K, K)$ rational response.
- Periodic-ARMA: one state updated with $K$-periodic coefficients, yielding higher-order rational responses.
Design of the AR/MA coefficients often follows a rational approximation of a target response $h^*(\lambda)$ over $[0, \lambda_{\max}]$ (e.g., Shanks- or Padé-type fits), permitting graph-independent universality. Steps include polynomial fitting, denominator-numerator matching, and partial fraction decomposition (Isufi et al., 2016). Such a realization guarantees that the recursions require only local neighbor exchanges per iteration, enabling distributed filtering on large-scale and dynamic graphs.
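The distributed ARMA$_1$ recursion can be sketched as a simple iterated neighbor exchange. The NumPy code below (dense matrices and the helper name `arma1_recursion` are expository assumptions; a real distributed implementation uses sparse, per-node message exchange) iterates the update and compares against the closed-form steady state:

```python
import numpy as np

def arma1_recursion(M, x, psi, phi, iters=300):
    """Distributed ARMA_1 update y <- psi * M y + phi * x. Each iteration
    requires only one exchange with graph neighbors (M is sparse in practice);
    converges to y* = phi * (I - psi M)^{-1} x when |psi| * rho(M) < 1."""
    y = np.zeros_like(x)
    for _ in range(iters):
        y = psi * (M @ y) + phi * x
    return y

# path graph; shifted matrix M = (lam_max/2) I - L keeps rho(M) <= lam_max/2
A = np.diag(np.ones(5), 1); A = A + A.T
L = np.diag(A.sum(axis=1)) - A
lam_max = np.linalg.eigvalsh(L)[-1]
M = 0.5 * lam_max * np.eye(6) - L
psi = 0.9 / (0.5 * lam_max)            # enforce |psi| * rho(M) <= 0.9 < 1
x = np.random.default_rng(1).standard_normal(6)
y = arma1_recursion(M, x, psi, phi=1.0)
y_star = np.linalg.solve(np.eye(6) - psi * M, x)   # closed-form steady state
```

Because $|\psi|\rho(M) \le 0.9$, the iterate contracts geometrically and matches the steady state to machine precision after a few hundred iterations.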
3. Graph Neural Network Structures and Expressivity
In graph neural architectures, ARMA filters serve as the building block for expressive and robust message passing:
- In (Bianchi et al., 2019), the ARMA GNN layer is implemented via $K$ parallel "Graph Convolutional Skip" (GCS) stacks, each iteratively computing
$$\bar{X}^{(t+1)} = \sigma\!\left(\hat{A}\,\bar{X}^{(t)} W + X V\right),$$
where $X$ are node features, $\hat{A} = D^{-1/2} A D^{-1/2}$ is a rescaled adjacency, $W$ is an AR weight (shared across iterations), $V$ an MA weight, and $\sigma$ is typically ReLU. After $T$ recursions per stack, the $K$ stack outputs are averaged.
- (Eliasof et al., 22 Jan 2025) constructs GRAMA, which extends ARMA recurrences to adapt via selective attention. For each recurrence, the AR and MA coefficients $a_k$ and $b_k$ are computed dynamically using multi-head attention over pooled feature and residual sequences, conferring adaptability and long-range propagation.
Theoretically, every ARMA($p$, $q$) is equivalent to a linear state-space model (SSM) and vice versa, which enables powerful connections to recent state-space graph models and analysis of stability and propagation range (Eliasof et al., 22 Jan 2025).
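The GCS stack structure can be sketched compactly in NumPy. This is a hedged illustration of the layer shape described above, not the reference implementation; weight initialization, shapes, and the function name `arma_gnn_layer` are assumptions for exposition.

```python
import numpy as np

def arma_gnn_layer(A, X, W_list, V_list, T=2):
    """Sketch of an ARMA_K GNN layer (after Bianchi et al., 2019):
    K parallel Graph Convolutional Skip (GCS) stacks, each iterating
    Xbar <- relu(A_hat @ Xbar @ W + X @ V), then averaging stack outputs."""
    relu = lambda z: np.maximum(z, 0.0)
    d = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(d, d))    # D^{-1/2} A D^{-1/2} normalization
    outs = []
    for W, V in zip(W_list, V_list):       # one (W, V) pair per parallel stack
        Xbar = relu(X @ V)                 # initial MA (skip) pass
        for _ in range(T):                 # T AR recursions, skip back to X
            Xbar = relu(A_hat @ Xbar @ W + X @ V)
        outs.append(Xbar)
    return np.mean(outs, axis=0)           # average the K stack outputs

# toy usage: 5-node ring, 3 input features, 4 hidden units, K = 2 stacks
rng = np.random.default_rng(0)
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
X = rng.standard_normal((5, 3))
W_list = [0.1 * rng.standard_normal((4, 4)) for _ in range(2)]
V_list = [0.1 * rng.standard_normal((3, 4)) for _ in range(2)]
H = arma_gnn_layer(A, X, W_list, V_list)   # shape (5, 4)
```

Note how the skip term `X @ V` re-injects the input at every iteration, which is what gives the stack its MA component alongside the AR propagation through `A_hat`.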
4. Exact Solutions for Denoising, Interpolation, and Temporal Extensions
GraphArm admits closed-form (graph spectral) or efficiently approximated solutions for classical signal processing tasks:
- Tikhonov Denoising: The solution to $\min_x \|x - t\|_2^2 + w\, x^\top L x$ is $x^* = (I + wL)^{-1} t$; the frequency response $h(\lambda) = \frac{1}{1 + w\lambda}$ is exactly ARMA of order $(1, 0)$.
- Wiener Denoising: For noise and signal covariances diagonalized in the Laplacian eigenbasis, the optimal filter is ARMA whenever the covariances are rational in $\lambda$.
- Interpolation: Given observations on a node subset, the regularized least-squares solution (with a diagonal sampling mask $S$) takes the form $x^* = (S + wL)^{-1} S t$ and reduces to an ARMA filtering problem realizable by the same recursions.
- Dynamic (Graph × Time) Filters: The basic ARMA recursion, driven by a time-varying signal $x_t$, yields a two-dimensional transfer function
$$H(\mu, \omega) = \frac{\varphi\, e^{-j\omega}}{1 - \psi\,\mu\, e^{-j\omega}},$$
where $\omega$ indexes the temporal frequency and $\mu$ the graph frequency. This accommodates joint graph-temporal filtering, with stability dictated by $|\psi|$ and the graph spectral radius.
- Time-Varying Graphs: Stability and exponential convergence carry over if $M$ varies in time with spectral norm uniformly bounded so that $|\psi|\,\|M_t\| < 1$; error bounds depend on the rate of change.
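The Tikhonov case above is easy to verify numerically: the closed-form solve and the rational spectral response coincide. A dense NumPy sketch (variable names and the toy path graph are illustrative):

```python
import numpy as np

# Tikhonov denoising: argmin_x ||x - t||^2 + w x^T L x has the closed form
# x* = (I + w L)^{-1} t, whose spectral response 1/(1 + w*lam) is ARMA(1, 0).
rng = np.random.default_rng(2)
A = np.diag(np.ones(5), 1); A = A + A.T      # path graph on 6 nodes
L = np.diag(A.sum(axis=1)) - A
t = rng.standard_normal(6)                   # noisy observation
w = 0.5                                      # smoothness weight

# route 1: direct linear solve of the optimality condition
x_closed = np.linalg.solve(np.eye(6) + w * L, t)

# route 2: apply the ARMA(1, 0) response in the graph-Fourier domain
lam, U = np.linalg.eigh(L)
x_spectral = U @ ((1.0 / (1.0 + w * lam)) * (U.T @ t))
```

The two routes agree to machine precision, confirming that Tikhonov denoising is exactly a rational (ARMA) spectral filter.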
5. Empirical and Theoretical Evaluation
GraphArm and its neural extensions have demonstrated robustness and superior expressivity across a range of benchmarks:
- Convergence: All ARMA recursions converge exponentially to steady state, at a geometric rate governed by $|\psi|\rho(M)$, so a modest number of iterations suffices in practice (Isufi et al., 2016).
- Approximation: ARMA filters (with a small number of parallel stacks, up to $K = 4$, and depth up to $T = 3$) provide sharp spectral selectivity and can closely match low- or band-pass targets while avoiding the approximation artifacts common to high-degree polynomial (FIR) filters (Bianchi et al., 2019).
- Robustness: ARMA filters maintain low spectral-response error under edge failures and topology perturbations, outperforming polynomial counterparts due to their graph-independent, IIR structure (Isufi et al., 2016, Bianchi et al., 2019).
- Downstream Performance: On node classification, graph-signal labeling, graph classification, and regression tasks, ARMA-GNNs yield results statistically superior to, or on par with, GCN, Chebyshev, Cayley, and attention-based GNNs. Gains are typically more pronounced on larger graphs or tasks with long-range dependencies:
| Method    | Cora Acc. | PPI Acc. | MUTAG | Proteins | QM9 (MSE) |
|-----------|-----------|----------|-------|----------|-----------|
| GCN       | 81.5%     | 80.8%    | 85.7  | 71.0     | 0.445     |
| Chebyshev | 79.5%     | 86.4%    | 82.6  | 72.1     | 0.433     |
| CayleyNet | 81.2%     | 84.9%    | 87.8  | 65.6     | 0.442     |
| ARMA      | 83.4%     | 90.5%    | 91.5  | 73.7     | 0.394     |
- GRAMA (Adaptive ARMA-GNN): On 14 synthetic and real-world benchmarks, GRAMA consistently improves over its backbone models and matches or outperforms SOTA on long-range tasks and challenging heterophilic classification, due to its adaptive, attention-driven ARMA coefficient selection (Eliasof et al., 22 Jan 2025).
6. Stability, Transferability, and Practical Guidance
GraphArm architectures possess strong theoretical stability conditions. For the basic ARMA$_1$ recursion, stability requires $|\psi| < 1/\rho(M)$; for parallel and periodic ARMA, analogous bounds on each branch's parameters. Equivalently, for a state-space realization of order $p$, the roots of the AR characteristic polynomial must lie strictly inside the unit disk; a simple sufficient condition is $\sum_{k=1}^{p} |a_k| < 1$ (Eliasof et al., 22 Jan 2025).
Transferability is ensured by the graph-independent design of the coefficients and the local, sparse recursion. The mapping $\lambda \mapsto h(\lambda)$ generalizes to new graphs, since small changes in topology incur only minor changes in the filter response (Bianchi et al., 2019, Isufi et al., 2016).
For implementation (Bianchi et al., 2019):
- A small number of parallel branches (up to $K = 4$) provides adequate expressivity.
- Depth up to $T = 3$ per branch suffices in small-world graphs.
- $L_2$ regularization and weight sharing across iterations help maintain stability.
- Dropout on the skip connections promotes filter diversity.
- The principal computational cost scales with the number of stacks, iterations, and edges per ARMA layer, comparable with Chebyshev polynomial filters of matching receptive field.
7. Theoretical Extensions and Connections
GraphArm is theoretically equivalent to discrete linear state-space models (SSMs), allowing the graphical extension of classical systems theory results. Any ARMA($p$, $q$) recursion can be mapped to an SSM, and vice versa; depth and expressivity are governed by the roots of the AR polynomial. The design accommodates attention-based, adaptive coefficient selection, as in selective SSMs (Eliasof et al., 22 Jan 2025).
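The ARMA-to-SSM mapping can be made concrete via the AR companion matrix, after which stability is a plain eigenvalue check. The sketch below uses illustrative names (`arma_companion`, `is_stable`) and is a minimal demonstration, not an implementation from the cited works:

```python
import numpy as np

def arma_companion(a):
    """Companion (state-transition) matrix for the AR part of an ARMA(p, q)
    recursion y_t = sum_k a[k] * y_{t-1-k} + (MA terms): the recursion is
    stable iff all eigenvalues lie strictly inside the unit disk."""
    p = len(a)
    C = np.zeros((p, p))
    C[0, :] = a               # AR coefficients fill the first row
    C[1:, :-1] = np.eye(p - 1)  # shift structure carries past states forward
    return C

def is_stable(a):
    """Eigenvalue test for AR stability (roots inside the unit disk)."""
    return np.max(np.abs(np.linalg.eigvals(arma_companion(a)))) < 1.0
```

For example, `is_stable([0.5, 0.3])` holds (and also satisfies the sufficient condition $\sum_k |a_k| < 1$), while `is_stable([1.2])` fails, since its single pole sits outside the unit disk.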
Furthermore, the temporal extension of ARMA filtering as developed in (Isufi et al., 2016) provides a formalism for spatio-temporal separation, selective temporal mode attenuation, and universality of the rational filter class across evolving graphs.
GraphARMA (GraphArm) represents the intersection of rational graph spectral filtering, iterative distributed algorithms, and adaptive neural sequence modeling. Its universality, stability, and robustness to graph perturbations position it as an essential mechanism for scalable, accurate, and transferable graph signal processing and deep learning. (Isufi et al., 2016, Bianchi et al., 2019, Eliasof et al., 22 Jan 2025)