
GraphArm: Rational Graph Filters & Neural Models

Updated 8 January 2026
  • GraphArm is a framework for rational graph filtering that combines autoregressive and moving average components to capture long-range, frequency-selective dependencies in graph signals.
  • It enables efficient distributed implementation with localized recursions, ensuring stability and transferability across both static and dynamic graph structures.
  • By integrating adaptive neural architectures, GraphArm enhances performance in node classification, signal denoising, and other graph tasks while maintaining low spectral-response error.

A Graph Autoregressive Moving Average (GraphARMA or, editor's term, GraphArm) filter is a family of rational-difference graph filters and graph neural architectures that generalize classical ARMA filtering to signals defined on graphs. GraphArm incorporates both autoregressive (AR) and moving-average (MA) components to model long-range, frequency-selective, and temporally evolving dependencies among graph-structured data. This methodology spans distributed signal processing on static and dynamic graphs (Isufi et al., 2016), graph neural network design (Bianchi et al., 2019), and adaptive attention-driven state space models (Eliasof et al., 22 Jan 2025). GraphArm bridges rational graph spectral filtering, distributed recursion, and expressive neural parameterization, with stability and transferability properties that make it central in modern graph learning.

1. Mathematical Definition and Spectral Formulation

The GraphARMA filter of orders $(P, Q)$ is defined in analogy to classical ARMA filters, as a rational function $H(\lambda)$ on the spectrum of a graph Laplacian $L$. The general form is

$$H(\lambda) = \frac{b_0 + b_1 \lambda + \dots + b_Q \lambda^{Q}}{a_0 + a_1 \lambda + \dots + a_P \lambda^{P}}$$

where $\lambda$ denotes an eigenvalue of $L$, and $\{b_q\}$ (numerator) and $\{a_p\}$ (denominator, often $a_0 = 1$) are, respectively, the MA and AR coefficients (Isufi et al., 2016, Bianchi et al., 2019, Eliasof et al., 22 Jan 2025).

Applying $H$ in the graph-Fourier domain, for a signal $x = \sum_n \langle x, \phi_n\rangle \phi_n$ (with $L\phi_n = \lambda_n \phi_n$):

$$H x = \sum_{n=1}^N H(\lambda_n) \langle x, \phi_n\rangle \phi_n$$

Polynomial choices (all $a_{p>0} = 0$) recover finite impulse response (FIR) or Chebyshev/GNN convolutional filters; general $P, Q$ yield rational spectral responses, enabling sharper or non-low-pass filtering.

GraphArm traditionally sets the filter coefficients independently of any particular graph, so that $H(\lambda)$ is valid for all graphs with Laplacian spectrum in $[0, \rho]$, supporting robustness and transferability (Isufi et al., 2016, Bianchi et al., 2019).
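As a concrete illustration, the rational response can be evaluated on the Laplacian eigenvalues and applied in the graph-Fourier domain. A minimal numpy sketch; the path graph and the order-(1,1) coefficients are illustrative choices, not values from the cited papers:

```python
import numpy as np

def arma_response(lam, b, a):
    """Rational spectral response H(lam) = B(lam)/A(lam) for MA coeffs b, AR coeffs a."""
    # np.polyval expects highest degree first, so reverse the coefficient order.
    return np.polyval(b[::-1], lam) / np.polyval(a[::-1], lam)

# Path graph on 4 nodes: Laplacian and its eigendecomposition L = Phi diag(lam) Phi^T.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
lam, Phi = np.linalg.eigh(L)

b = np.array([1.0, 0.2])  # MA coefficients b_0, b_1 (illustrative)
a = np.array([1.0, 0.5])  # AR coefficients a_0 = 1, a_1 (illustrative)

x = np.array([1.0, 0.0, 0.0, 0.0])
# Filtering in the graph-Fourier domain: H x = Phi diag(H(lam)) Phi^T x.
y = Phi @ (arma_response(lam, b, a) * (Phi.T @ x))
```

Because the filter is a rational function of $L$, the same $y$ is obtained from the vertex-domain identity $y = (I + 0.5L)^{-1}(I + 0.2L)x$, which is what the distributed recursions of Section 2 compute iteratively.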

2. Distributed Implementation and Coefficient Design

GraphArm admits efficient vertex-domain realization as distributed, local recursions. Core update types include:

  • ARMA$_1$ recursion ("potential-kernel" block):

$$y_{t+1} = \psi L y_t + \phi x, \qquad z_{t+1} = y_{t+1} + c x$$

for scalar coefficients $\psi, \phi, c$. The steady-state frequency response becomes $H(\lambda) = c + r/(\lambda - p)$ with $p = 1/\psi$ and $r = -\phi/\psi$.

  • Parallel ARMA$_K$: $K$ independent ARMA$_1$ recursions run in parallel and summed:

$$y^{(k)}_{t+1} = \psi^{(k)} L y^{(k)}_t + \phi^{(k)} x, \qquad z_{t+1} = c x + \sum_{k=1}^K y^{(k)}_{t+1}$$

leading to $H(\lambda) = c + \sum_{k=1}^K r_k/(\lambda - p_k)$.

  • Periodic ARMA$_K$: a single state updated with $K$-periodic coefficients, yielding higher-order rational responses.

Design of the AR/MA coefficients often follows a rational approximation of a target response $H^*(\lambda)$ over $[0, \rho]$ (e.g., Shanks or Padé-type fits), permitting graph-independent universality. The steps are polynomial fitting, denominator matching, and partial-fraction decomposition (Isufi et al., 2016). This realization guarantees that each iteration of the recursion requires only exchanges with immediate neighbors, enabling distributed filtering on large-scale and dynamic graphs.
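The ARMA$_1$ recursion can be checked numerically: iterating the local update drives the state to the rational steady-state response. A minimal numpy sketch with illustrative scalars (stability holds here because $|\psi|\rho < 1$ for this Laplacian):

```python
import numpy as np

# Path graph on 4 nodes; its Laplacian spectral radius is 2 + sqrt(2) ~ 3.41.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
lam, Phi = np.linalg.eigh(L)

psi, phi, c = 0.2, 1.0, 0.5  # illustrative; |psi| * rho ~ 0.68 < 1 ensures convergence
x = np.array([0.0, 1.0, 1.0, 0.0])

# Distributed recursion: L @ y only mixes each node with its immediate neighbours.
y = np.zeros_like(x)
for _ in range(200):
    y = psi * (L @ y) + phi * x
z = y + c * x

# Steady state matches the rational response H(lam) = c + r/(lam - p)
# with p = 1/psi and r = -phi/psi.
p, r = 1.0 / psi, -phi / psi
z_spectral = Phi @ ((c + r / (lam - p)) * (Phi.T @ x))
```

The recursion converges geometrically at rate $|\psi|\rho$, so a few dozen neighbor exchanges already reach numerical steady state in this example.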

3. Graph Neural Network Structures and Expressivity

In graph neural architectures, ARMA filters serve as the building block for expressive and robust message passing:

  • In (Bianchi et al., 2019), the ARMA$_K$ GNN layer is implemented via $K$ parallel "Graph Convolutional Skip" (GCS) stacks, each iteratively computing

$$Y^{(t+1)} = \sigma\bigl(\tilde A\, Y^{(t)} W + X V\bigr)$$

where $X$ are the node features, $\tilde A$ is a rescaled adjacency matrix, $W$ is a shared AR weight, $V$ an MA weight, and $\sigma$ is typically ReLU. After $T$ recursions per stack, the stack outputs are averaged.

  • (Eliasof et al., 22 Jan 2025) constructs GRAMA, which extends ARMA recurrences to adapt via selective attention. For each recurrence, the AR and MA coefficients $\{\phi_i\}$ and $\{\theta_j\}$ are computed dynamically using multi-head attention over pooled feature and residual sequences, conferring adaptability and long-range propagation.

Theoretically, every ARMA($p$, $q$) model is equivalent to a linear state-space model (SSM) and vice versa, which enables powerful connections to recent state-space graph models and to the analysis of stability and propagation range (Eliasof et al., 22 Jan 2025).
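A minimal numpy sketch of an ARMA$_K$ layer as $K$ parallel GCS stacks. The toy graph, the random weights, the $0.1$ weight scaling, and the initialization $Y^{(0)} = XV$ are assumptions for illustration, not prescriptions from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcs_stack(X, A_tilde, W, V, T):
    """One GCS stack: T iterations of Y <- ReLU(A_tilde @ Y @ W + X @ V)."""
    Y = X @ V                                         # assumed initialization
    for _ in range(T):
        Y = np.maximum(A_tilde @ Y @ W + X @ V, 0.0)  # ReLU nonlinearity
    return Y

def arma_layer(X, A_tilde, Ws, Vs, T):
    """ARMA_K layer: average the outputs of K parallel GCS stacks."""
    return np.mean([gcs_stack(X, A_tilde, W, V, T) for W, V in zip(Ws, Vs)], axis=0)

# Toy undirected graph with symmetric normalization A_tilde = D^{-1/2} A D^{-1/2}.
N, F_in, F_out, K, T = 5, 3, 4, 2, 3
A = np.triu((rng.random((N, N)) > 0.5).astype(float), 1)
A = A + A.T
deg = np.clip(A.sum(axis=1), 1.0, None)  # guard against isolated nodes
A_tilde = A / np.sqrt(np.outer(deg, deg))

X = rng.standard_normal((N, F_in))
Ws = [0.1 * rng.standard_normal((F_out, F_out)) for _ in range(K)]
Vs = [0.1 * rng.standard_normal((F_in, F_out)) for _ in range(K)]
Y = arma_layer(X, A_tilde, Ws, Vs, T)  # (N, F_out) node embeddings
```

Sharing $W$ and $V$ across the $T$ iterations within a stack is what makes each stack a fixed-coefficient AR recursion rather than a generic deep network.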

4. Exact Solutions for Denoising, Interpolation, and Temporal Extensions

GraphArm admits closed-form (graph spectral) or efficiently approximated solutions for classical signal processing tasks:

  • Tikhonov Denoising: The solution to $\min_x \|x - t\|^2 + w\, x^\top L^k x$ is $(I + w L^k)^{-1} t$; the frequency response $H(\lambda) = 1/(1 + w \lambda^k)$ is exactly ARMA of order $k$.
  • Wiener Denoising: For Laplacian-diagonal noise and signal covariances, the optimal filter $H(\lambda) = \Sigma_x(\lambda)/[\Sigma_x(\lambda) + \Sigma_n(\lambda)]$ is ARMA whenever the covariances are rational in $\lambda$.
  • Interpolation: Given observations on a node subset, the regularized least-squares solution $(S + w L^k)^{-1} t$ (with selection mask $S$) reduces to ARMA for $k = 1$.
  • Dynamic (Graph × Time) Filters: The basic ARMA$_1$ recursion, driven by a time-varying signal $x_t$, yields the two-dimensional transfer function

$$H(z, \lambda) = \frac{\phi z^{-1} + c z^{-1}}{1 - \psi \lambda z^{-1}}$$

where $z$ indexes the temporal frequency. This accommodates joint graph-temporal filtering, with stability dictated by $|\psi|\rho < 1$, where $\rho$ is the graph spectral radius.

  • Time-Varying Graphs: Stability and exponential convergence carry over if $L_t$ varies in time with norm bounded by $\rho$; the error bounds depend on the rate of change.
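The Tikhonov case is easy to verify directly: the vertex-domain closed form and the rational spectral response coincide. A small numpy check (the graph, $w$, $k$, and the noisy signal are illustrative):

```python
import numpy as np

# Path graph on 4 nodes and its eigendecomposition.
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
lam, Phi = np.linalg.eigh(L)

w, k = 0.5, 2                        # illustrative regularization weight and order
t = np.array([1.0, -1.0, 2.0, 0.0])  # noisy observation

# Closed-form Tikhonov denoiser: x* = (I + w L^k)^{-1} t ...
x_star = np.linalg.solve(np.eye(4) + w * np.linalg.matrix_power(L, k), t)

# ... equals the rational spectral filter H(lam) = 1 / (1 + w lam^k).
x_spectral = Phi @ ((1.0 / (1.0 + w * lam**k)) * (Phi.T @ t))
```

The point of the ARMA view is that the matrix inverse above never needs to be formed: the same $x^*$ is reachable by the local recursions of Section 2.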

5. Empirical and Theoretical Evaluation

GraphArm and its neural extensions have demonstrated robustness and superior expressivity across a range of benchmarks:

  • Convergence: All ARMA recursions achieve exponential convergence to steady state, typically in $O(\log \epsilon^{-1})$ iterations (Isufi et al., 2016).
  • Approximation: ARMA$_K$ filters (with $K = 2$–$4$, depth $T = 1$–$3$) provide sharp spectral selectivity and can closely match low- or band-pass targets while avoiding the approximation artifacts common to high-degree polynomial (FIR) filters (Bianchi et al., 2019).
  • Robustness: ARMA filters maintain low spectral-response error under edge failures and topology perturbations, outperforming polynomial counterparts due to their graph-independent, IIR structure (Isufi et al., 2016, Bianchi et al., 2019).
  • Downstream Performance: On node classification, graph-signal labeling, graph classification, and regression tasks, ARMA-GNNs yield results superior or on par with GCN, Chebyshev, Cayley, and attention-based GNNs. Gains are typically more pronounced on larger graphs and on tasks with long-range dependencies:

| Method | Cora Acc. | PPI Acc. | MUTAG | Proteins | QM9 ($\mu$ MSE) |
|-----------|-----------|----------|-------|----------|-----------------|
| GCN | 81.5% | 80.8% | 85.7 | 71.0 | 0.445 |
| Chebyshev | 79.5% | 86.4% | 82.6 | 72.1 | 0.433 |
| CayleyNet | 81.2% | 84.9% | 87.8 | 65.6 | 0.442 |
| ARMA$_K$ | 83.4% | 90.5% | 91.5 | 73.7 | 0.394 |

(Bianchi et al., 2019)

  • GRAMA (Adaptive ARMA-GNN): On 14 synthetic and real-world benchmarks, GRAMA consistently improves over its backbone models and matches or outperforms SOTA on long-range tasks and challenging heterophilic classification, due to its adaptive, attention-driven ARMA coefficient selection (Eliasof et al., 22 Jan 2025).

6. Stability, Transferability, and Practical Guidance

GraphArm architectures possess strong theoretical stability guarantees. For the basic ARMA$_1$ recursion, stability requires $|p| > \rho$; for parallel and periodic ARMA, analogous bounds on each branch's parameters apply. Equivalently, for a state-space realization of order $p$, the roots of the AR characteristic polynomial must lie inside the unit disk, and $\sum_j |\phi_j| \le 1$ is a sufficient condition (Eliasof et al., 22 Jan 2025).
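In practice the state-space condition is easy to check: form the AR characteristic polynomial and test that its roots lie strictly inside the unit disk. A small helper, sketched under the convention $y_t = \sum_j \phi_j y_{t-j} + \dots$ (an assumption about sign conventions, which vary across references):

```python
import numpy as np

def ar_stable(phi):
    """Exact check: roots of z^p - phi_1 z^{p-1} - ... - phi_p inside the unit disk."""
    char = np.concatenate(([1.0], -np.asarray(phi, dtype=float)))
    return bool(np.all(np.abs(np.roots(char)) < 1.0))

def ar_stable_sufficient(phi):
    """Sufficient (not necessary) condition from the text: sum_j |phi_j| <= 1."""
    return float(np.sum(np.abs(phi))) <= 1.0
```

For example, `ar_stable([0.5, 0.3])` holds (and the sufficient test passes, since $0.8 \le 1$), while `ar_stable([1.2])` fails because the single root $z = 1.2$ lies outside the unit disk.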

Transferability is ensured by the graph-independent design of the coefficients and the local, sparse recursion. The mapping generalizes to new graphs, as small changes in topology incur only minor changes in filter response (Bianchi et al., 2019, Isufi et al., 2016).

For implementation (Bianchi et al., 2019):

  • $K = 2$–$4$ branches provide adequate expressivity.
  • Depth $T = 1$–$3$ per branch suffices on small-world graphs.
  • $\ell_2$ regularization and weight sharing across iterations help maintain stability.
  • Dropout on the skip connections promotes filter diversity.
  • The principal computational cost is $O(K T |E| F_{\text{out}})$ per ARMA$_K$ layer, comparable with Chebyshev polynomial filters.

7. Theoretical Extensions and Connections

GraphArm is theoretically equivalent to discrete linear state-space models (SSMs), allowing the graphical extension of classical systems-theory results. Any ARMA($p$, $q$) recursion can be mapped to an SSM, and vice versa; depth and expressivity are governed by the roots of the AR polynomial. The design accommodates attention-based, adaptive coefficient selection, as in selective SSMs (Eliasof et al., 22 Jan 2025).

Furthermore, the temporal extension of ARMA filtering as developed in (Isufi et al., 2016) provides a formalism for spatio-temporal separation, selective temporal mode attenuation, and universality of the rational filter class across evolving graphs.


GraphARMA (GraphArm) represents the intersection of rational graph spectral filtering, iterative distributed algorithms, and adaptive neural sequence modeling. Its universality, stability, and robustness to graph perturbations position it as an essential mechanism for scalable, accurate, and transferable graph signal processing and deep learning. (Isufi et al., 2016, Bianchi et al., 2019, Eliasof et al., 22 Jan 2025)
