Spiking Graph Prompt Feature (SpikingGPF)

Updated 13 January 2026
  • SpikingGPF is a sparse, event-driven prompt mechanism that integrates discrete IF/LIF neuron models to efficiently encode temporal and structural graph data.
  • It employs spiking neuron dynamics for selective atom activation and prompt generation, reducing computational overhead and enhancing noise robustness.
  • Empirical results show that SpikingGPF improves few-shot classification accuracy and scalability on large graphs while maintaining significant memory savings.

Spiking Graph Prompt Feature (SpikingGPF) denotes a class of sparse, spiking neuron-driven prompt mechanisms for graph neural networks (GNNs) that integrate event-driven neuronal dynamics with prompt-based transfer and adaptation strategies. SpikingGPF methods leverage discrete Integrate-and-Fire (IF) or Leaky Integrate-and-Fire (LIF) neuron architectures to induce sparse activations in prompt selection and feature manipulation, which both promotes computational and memory efficiency and enhances robustness to noisy inputs. In this framework, node-level prompts are represented either as time-coded spike sequences or as sparse combinations of learned prompt atoms, effectively encoding both temporal and structural information pertinent to each node's local graph neighborhood (Li et al., 2022, Jiang et al., 6 Jan 2026).

1. Motivation and Context

Prompt-based adaptation has become standard for efficiently transferring large pre-trained GNNs to downstream tasks. Classic Graph Prompt Feature (GPF) learning introduces $K$ learnable prompt "atoms" $B \in \mathbb{R}^{d \times K}$ for node features $X \in \mathbb{R}^{n \times d}$, then adapts the model by learning a combination coefficient $s_i \in \Delta^K$ for each node $i$ to produce a prompt vector $p_i = \sum_{k=1}^K s_{ik} b_k$ and prompted features $X + P$ for the downstream task. However, dense GPFs modify all feature dimensions and atom slots per node, resulting in redundancy, high sensitivity to noise, and computational overhead (Jiang et al., 6 Jan 2026).
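As a point of reference, the dense GPF scheme described above can be sketched in a few lines of NumPy. The shapes and the softmax parameterization of $s_i$ are illustrative; in practice the atoms and per-node logits are learned, not sampled at random:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 5, 8, 4                    # nodes, feature dim, number of prompt atoms

X = rng.normal(size=(n, d))          # node features X in R^{n x d}
B = rng.normal(size=(d, K))          # learnable prompt atoms B in R^{d x K}
logits = rng.normal(size=(n, K))     # per-node atom scores (learned in practice)

# Softmax places each coefficient vector s_i on the simplex Delta^K
S = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

P = S @ B.T                          # p_i = sum_k s_ik b_k, so P in R^{n x d}
X_prompted = X + P                   # prompted features fed to the frozen GNN
```

Note that every entry of `P` is generically nonzero, which is exactly the density SpikingGPF is designed to remove.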

SpikingGPF addresses these limitations by employing spiking neuron mechanisms to realize two forms of sparsity:

  • Sparse atom selection: Only a limited subset of prompt atoms is activated per node.
  • Sparse prompt vectors: Only a few feature dimensions are modified per node, mitigating noise propagation and enabling focused adaptation (Jiang et al., 6 Jan 2026).

These event-driven prompt features also align with the asynchronous nature and temporal evolution of many real-world graphs (Li et al., 2022).

2. Spiking Neuron Dynamics Underpinning SpikingGPF

SpikingGPF utilizes discrete-time Integrate-and-Fire (IF) or Leaky Integrate-and-Fire (LIF) neuron models to produce its sparse representations.

Integrate-and-Fire Update Equations

  • For each time step $t$, the membrane potential $v^{(t)}$ integrates an external drive $\alpha$:

$\tilde v^{(t)} = v^{(t-1)} + \alpha$

  • The neuron emits a binary spike $h^{(t)}$ if $\tilde v^{(t)} \geq \mu$:

$h^{(t)} = \begin{cases} 1, & \tilde v^{(t)} \ge \mu \\ 0, & \text{otherwise} \end{cases}$

  • The potential soft-resets as $v^{(t)} = \tilde v^{(t)} - \mu h^{(t)}$.

Over $T$ steps, the average firing rate $h = (1/T) \sum_{t=1}^{T} h^{(t)}$ serves as a sparse code (Jiang et al., 6 Jan 2026).
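These IF update rules can be simulated directly. The drives and threshold below are illustrative; note how sub-threshold drives produce a firing rate of exactly zero, which is the source of the sparsity:

```python
import numpy as np

def if_firing_rate(alpha, mu=0.5, T=4):
    """Discrete Integrate-and-Fire: integrate drive alpha each step, spike
    when the membrane potential crosses mu, soft-reset by subtracting mu.
    Returns the average firing rate over T steps (a value in [0, 1])."""
    v = np.zeros_like(alpha, dtype=float)
    spikes = np.zeros_like(alpha, dtype=float)
    for _ in range(T):
        v_tilde = v + alpha                 # v~(t) = v(t-1) + alpha
        h = (v_tilde >= mu).astype(float)   # spike if threshold crossed
        v = v_tilde - mu * h                # soft reset: v(t) = v~(t) - mu h(t)
        spikes += h
    return spikes / T                       # h = (1/T) sum_t h(t)

rates = if_firing_rate(np.array([-0.2, 0.1, 0.3, 0.6]), mu=0.5, T=4)
# -> [0.0, 0.0, 0.5, 1.0]: weak drives never fire, strong drives fire every step
```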

The LIF variant introduces decay ("leak"):

$V_i^t = V_i^{t-1} + \frac{1}{\tau_m}\left[ -(V_i^{t-1} - V_\text{reset}) + I_i^t \right]$

with adaptive firing threshold $V_{i,\text{th}}^{t} = \tau_\text{th}\, V_{i,\text{th}}^{t-1} + \gamma\, s_i^t$ (Li et al., 2022).
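A minimal sketch of one LIF step with this adaptive threshold follows. The hard reset on spiking and all constants are illustrative choices, not values prescribed by the source:

```python
import numpy as np

def lif_step(V, V_th, I, tau_m=2.0, tau_th=0.9, gamma=0.1, V_reset=0.0):
    """One LIF update: leak toward V_reset, integrate input current I,
    spike when V crosses the adaptive threshold V_th, then decay the
    threshold and bump it by gamma for neurons that fired."""
    V = V + (1.0 / tau_m) * (-(V - V_reset) + I)   # leaky integration
    s = (V >= V_th).astype(float)                  # binary spike output
    V = np.where(s > 0, V_reset, V)                # reset fired neurons
    V_th = tau_th * V_th + gamma * s               # adaptive threshold
    return V, V_th, s

# A strong input drives a spike; the threshold then rises from 0.5 to 0.55,
# so the neuron is harder to fire on the next step.
V, V_th, s = lif_step(np.array([0.0]), np.array([0.5]), np.array([2.0]))
```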

These spike-based mechanisms natively encourage sparsity and temporally selective encoding, critical for scalable representations and noise resilience.

3. SpikingGPF Architectures

Two major architectural variants have emerged:

A. Spiking Graph Prompt Feature for Temporal Graph Representation

Nodes process temporally evolving neighborhood messages, encoded via spiking LIF networks:

  1. Input: Analog node features $x_v^t$ at each time $t$.
  2. Encoding: Features are injected as input currents to the spiking layer.
  3. Spiking Message Passing:
    • Nodes aggregate binary spikes from sampled neighbors at each time.
    • Updates follow a $K$-layer LIF network, producing spike outputs $\{s_v^{t,(k)}\}$.
  4. Pooling: Concatenation or aggregation across time forms the time-coded SpikingGPF embedding $z_v$.
  5. Downstream use: SpikingGPF serves as the prompt input for classification or decoding tasks (Li et al., 2022).
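The five steps above can be sketched end-to-end on a toy graph. The random adjacency, the shared message weight matrix, and the leak-free IF dynamics (used in place of full LIF for brevity) are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T, mu = 6, 4, 3, 0.5                     # nodes, dim, time steps, threshold

A = (rng.random((n, n)) < 0.4).astype(float)   # toy binary adjacency
np.fill_diagonal(A, 1.0)                       # include self-loops
W = rng.normal(scale=0.5, size=(d, d))         # shared message weight (assumed)

V = np.zeros((n, d))                           # membrane potentials
spike_train = []
for t in range(T):
    x_t = rng.normal(size=(n, d))              # analog features at time t
    s_prev = spike_train[-1] if spike_train else np.zeros((n, d))
    I = x_t + (A @ s_prev) @ W                 # inject features + neighbor spikes
    V = V + I                                  # IF integration (no leak here)
    s = (V >= mu).astype(float)                # binary spikes
    V = V - mu * s                             # soft reset
    spike_train.append(s)

# Pooling: concatenation across time yields the time-coded embedding z_v
z = np.concatenate(spike_train, axis=1)        # shape (n, d*T)
```

The binary spike train is what makes the 1-bit intermediate storage discussed in Section 4 possible.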

B. SpikingGPF for Sparse Prompt Learning in Pretrained GNNs

Here, two spiking modules operate as follows:

  • S-learning (Sparse Atom Selection):
    • Node features $x_i$ are mapped to $K$ drives via $\alpha_{ik} = w_k^\top x_i$.
    • IF neurons translate these into spike rates $h_{ik}$, sparsifying atom selection.
    • A softmax enforces the simplex constraint for the combination vector $s_i$.
  • P-learning (Sparse Prompt Generation):
    • For each node, $B s_i$ is fed to signed IF neurons, producing a sparse prompt vector $p_i \in \mathbb{R}^d$.
  • Integration: The prompted graph $\tilde X = X + P$ is processed by the frozen encoder $f_\Phi$, and only the prompt parameters and head are updated (Jiang et al., 6 Jan 2026).

This design encourages extremely compact, interpretable adaptation.
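A compact sketch of the S-learning/P-learning pipeline is given below. All weights are random placeholders, and the ternary sign-times-rate reading of "signed IF neurons" is one plausible interpretation rather than the source's exact construction:

```python
import numpy as np

def if_rate(alpha, thresh, T=4):
    """IF firing rate over T steps with soft reset; values lie in [0, 1]."""
    v = np.zeros_like(alpha)
    h_sum = np.zeros_like(alpha)
    for _ in range(T):
        v = v + alpha
        h = (v >= thresh).astype(float)
        v = v - thresh * h
        h_sum += h
    return h_sum / T

rng = np.random.default_rng(1)
n, d, K = 4, 6, 3
X = rng.normal(size=(n, d))
Wk = rng.normal(size=(d, K))                   # atom-scoring weights w_k (assumed)
B = rng.normal(size=(d, K))                    # prompt atoms

# S-learning: IF rates sparsify the atom drives, softmax restores the simplex
h = if_rate(X @ Wk, thresh=0.5)                # weak drives get rate exactly 0
S = np.exp(h) / np.exp(h).sum(axis=1, keepdims=True)

# P-learning: signed IF on B s_i -- here a ternary sign * rate code
drive = S @ B.T
P = np.sign(drive) * if_rate(np.abs(drive), thresh=0.35)

X_prompted = X + P                             # few dims of each p_i are modified
```

Because `if_rate` quantizes to multiples of $1/T$, each prompt entry takes one of a handful of discrete values, which is what keeps the prompt both sparse and cheap to store.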

4. Practical Properties and Computational Complexity

SpikingGPF methods enjoy several salient properties:

  • Sparsity: Average firing rates result in 20–30% active units per step, inducing 70–80% zero operations (Li et al., 2022).
  • Memory Efficiency: Binary intermediate states use 1 bit versus 32-bit floats, realizing 32× memory savings in spiking graph representation learning.
  • Parameter and Time Complexity: Prompt learning adds only $\mathcal{O}(dK)$ parameters and incurs $\mathcal{O}(nKT)$ (S-learning) and $\mathcal{O}(ndT)$ (P-learning) overhead, negligible compared to GNN message passing, with recommended values $K \leq 20$, $T \leq 4$ (Jiang et al., 6 Jan 2026).
  • Scalability: For event-driven temporal GNNs, overall epoch time complexity is $\mathcal{O}(T|V|S^K d^2)$, scaling linearly with the number of nodes and time steps when $S, K, d$ are small (Li et al., 2022).

5. Empirical Performance and Robustness

Extensive evaluations reveal the following:

  • Accuracy in Few-Shot Settings: SpikingGPF surpasses all dense GPF baselines by 2–5 points in one-shot accuracy across diverse node-classification benchmarks, especially under severe label scarcity (Jiang et al., 6 Jan 2026).
  • Noise Robustness: Under random or adversarial perturbations, such as random edge attacks (20–100%) or Metattack (5–10%), SpikingGPF typically degrades by only $\sim$10%, compared with $\sim$25% for dense GPFs, and retains an 8-point lead in adversarial robustness (Jiang et al., 6 Jan 2026).
  • Scalable Dynamic Graph Learning: On large graphs (e.g., 2.7M nodes, 13.9M edges), temporal SpikingGPF as part of SpikeNet yields Macro-F1 ≈ 83.9%, Micro-F1 ≈ 83.8%, outperforming TGAT and dense RNN-based methods at half the parameter count and 4× faster per epoch (Li et al., 2022).

6. Summary of Mechanisms and Construction Pipeline

The SpikingGPF pipeline consists of:

  1. Encoding node features as spiking neuron drives.
  2. For $t = 1 \dots T$, sampling graph neighborhoods, aggregating neighbor spikes, and updating neuron states via IF/LIF dynamics.
  3. Pooling spike trains to form node-specific, time-coded or sparse prompt embeddings.
  4. Supplying these embeddings to a downstream classifier, typically with the base GNN encoder frozen.

This construction yields event-driven, compact node embeddings or prompt vectors that capture graph structure and temporal evolution, enabling robust, scalable adaptation to new tasks (Li et al., 2022, Jiang et al., 6 Jan 2026).

7. Ablation Studies and Hyperparameter Selection

  • Ablation: Replacing either S-learning or P-learning with its dense alternative diminishes performance; the full SpikingGPF (S + P) consistently outperforms other variants, with S-only or P-only providing incremental gains over dense GPF.
  • Hyperparameters:
    • Number of atoms: $K = 20$
    • IF steps: $T = 2$ or $4$
    • Atom threshold: $\mu \approx 0.01$
    • Prompt threshold: $\gamma \approx 0.35$
    • Learning rate: $1 \times 10^{-3}$
    • Hidden dimension: $256$
    • Batch size: $128$
    • Larger $\mu, \gamma$ increase sparsity but can under-prompt if set too large (Jiang et al., 6 Jan 2026).
  • Surrogate Gradient: Slope $\alpha = 1.0$ balances accurate approximation of the step function $\Theta$ against gradient stability (Li et al., 2022).
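The surrogate's role can be illustrated with a sigmoid-derivative surrogate of slope $\alpha$, one common choice for training through the non-differentiable spike function (the source does not pin down the exact surrogate shape):

```python
import numpy as np

def heaviside(x):
    """Forward spike function Theta(x): a non-differentiable step."""
    return (x >= 0).astype(float)

def surrogate_grad(x, alpha=1.0):
    """Backward pass replaces dTheta/dx with the derivative of a sigmoid
    of slope alpha: smooth, peaked at x = 0, and vanishing far from the
    threshold. Larger alpha approximates the step more closely but makes
    gradients spikier; alpha = 1.0 is the reported balance."""
    s = 1.0 / (1.0 + np.exp(-alpha * x))
    return alpha * s * (1.0 - s)

x = np.array([-2.0, 0.0, 2.0])
spikes = heaviside(x)            # forward: [0., 1., 1.]
grads = surrogate_grad(x)        # backward: symmetric, maximal 0.25 at x = 0
```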

A plausible implication is that the benefits of SpikingGPF are maximized when both sparse atom selection and sparse prompt vector construction are employed, reflecting the complementary roles of structural and feature-level sparsity in adaptive graph representation.


References:

  • "Scaling Up Dynamic Graph Representation Learning via Spiking Neural Networks" (Li et al., 2022)
  • "When Prompting Meets Spiking: Graph Sparse Prompting via Spiking Graph Prompt Learning" (Jiang et al., 6 Jan 2026)
