Dynamic Spiking Framework for Graph Neural Networks (2401.05373v3)

Published 15 Dec 2023 in cs.NE, cs.AI, and cs.LG

Abstract: The integration of Spiking Neural Networks (SNNs) and Graph Neural Networks (GNNs) is gradually attracting attention due to their low power consumption and high efficiency in processing the non-Euclidean data represented by graphs. However, dynamic graph representation learning faces challenges such as high complexity and large memory overheads. Current work often replaces Recurrent Neural Networks (RNNs) with SNNs, using binary rather than continuous features for efficient training, which overlooks graph structure information and leads to the loss of detail during propagation. Additionally, optimizing dynamic spiking models typically requires propagating information across time steps, which increases memory requirements. To address these challenges, we present a framework named Dynamic Spiking Graph Neural Networks (Dy-SIGN). To mitigate the information loss problem, Dy-SIGN propagates early-layer information directly to the last layer for information compensation. To accommodate the memory requirements, we apply implicit differentiation on the equilibrium state, which does not rely on the exact reverse of the forward computation. While traditional implicit differentiation methods are usually applied in static settings, Dy-SIGN extends them to the dynamic graph setting. Extensive experiments on three large-scale real-world dynamic graph datasets validate the effectiveness of Dy-SIGN on dynamic node classification tasks with lower computational costs.

Authors (6)
  1. Nan Yin
  2. Mengzhu Wang
  3. Zhenghan Chen
  4. Giulia De Masi
  5. Bin Gu
  6. Huan Xiong

Summary

Dynamic Spiking Graph Neural Networks (Dy-SIGN): A Framework for Efficient Node Classification

Introduction

The fusion of Spiking Neural Networks (SNNs) with Graph Neural Networks (GNNs) is a promising direction for handling dynamic graphs, which naturally represent evolving structures such as social or citation networks. The combination is attractive for its computational efficiency and its applicability to non-Euclidean data, but significant challenges have hindered its realization. Chief among these are the high complexity and substantial memory demands of dynamic graph representation learning, compounded by the tendency of SNNs to lose graph structure and feature detail during information propagation.

Dynamic Spiking Graph Neural Networks (Dy-SIGN)

Dy-SIGN is a framework that addresses these core challenges. It uses an information compensation mechanism to mitigate the loss of graph structural detail in SNNs, and it applies implicit differentiation on the equilibrium state to keep memory requirements manageable. Together, these two components enable efficient training without sacrificing accuracy, as verified by extensive experiments on large-scale real-world dynamic graph datasets for dynamic node classification.
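To make the compensation idea concrete, the sketch below shows a minimal spiking GNN in PyTorch whose last layer receives a projected copy of the original node features. This is an illustration of the general idea under simplifying assumptions, not the paper's implementation: the class names (`LIFNeuron`, `SpikingGraphLayer`, `CompensatedSpikingGNN`), the dense normalized adjacency, the soft-reset LIF dynamics, and all hyperparameters are hypothetical choices.

```python
import torch
import torch.nn as nn


class LIFNeuron(nn.Module):
    """Leaky integrate-and-fire unit: decay, integrate, threshold, soft reset."""

    def __init__(self, tau: float = 0.5, v_th: float = 1.0):
        super().__init__()
        self.tau = tau
        self.v_th = v_th

    def forward(self, current, v):
        v = self.tau * v + current          # leaky integration of the input current
        spike = (v >= self.v_th).float()    # binary spike (non-differentiable; training
                                            # needs a surrogate or the implicit method
                                            # sketched further below)
        v = v - spike * self.v_th           # soft reset after firing
        return spike, v


class SpikingGraphLayer(nn.Module):
    """One graph-convolution step whose activations are binary spikes."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.lif = LIFNeuron()

    def forward(self, x, adj, v):
        current = adj @ self.lin(x)         # neighbour aggregation over a dense adjacency
        return self.lif(current, v)


class CompensatedSpikingGNN(nn.Module):
    """Spiking GNN with an information-compensation channel: the continuous input
    features are projected and re-injected before the last layer, so detail lost
    by binarisation is partly restored."""

    def __init__(self, in_dim, hid_dim, out_dim, num_layers=2):
        super().__init__()
        dims = [in_dim] + [hid_dim] * (num_layers - 1) + [out_dim]
        self.layers = nn.ModuleList(
            [SpikingGraphLayer(dims[i], dims[i + 1]) for i in range(num_layers)]
        )
        self.compensate = nn.Linear(in_dim, dims[-2], bias=False)

    def forward(self, x, adj, time_steps=4):
        states = [torch.zeros(x.size(0), l.lin.out_features, device=x.device)
                  for l in self.layers]
        out = 0.0
        for _ in range(time_steps):                  # unroll over spiking time steps
            h = x
            for i, layer in enumerate(self.layers):
                if i == len(self.layers) - 1:        # compensation channel into last layer
                    h = h + self.compensate(x)
                h, states[i] = layer(h, adj, states[i])
            out = out + h
        return out / time_steps                      # firing-rate readout for classification
```

In this toy form the readout is a firing-rate average over a few time steps on a single static snapshot; a dynamic-graph variant would repeat the loop per snapshot while carrying the membrane states forward in time.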

Addressing Information Loss and Memory Consumption

Dy-SIGN introduces a two-pronged approach to address the challenges of applying SNNs to dynamic graphs:

  • Information Compensation Mechanism: This mechanism aims to counteract the loss of structural and neighboring node information inherent to SNNs. By establishing a direct information channel between early and final layers of the network, Dy-SIGN integrates original graph details into the feature representations, thus enhancing the quality and accuracy of node classifications.
  • Implicit Differentiation for Dynamic Spiking Graphs: Traditional approaches to optimizing dynamic spiking models demand significant memory for propagating information across time steps. Dy-SIGN instead applies implicit differentiation at the equilibrium state in the dynamic graph setting, reducing memory consumption without reversing the forward computation (a minimal sketch of this idea follows the list).
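To illustrate the memory argument in the second point, the following PyTorch sketch implements generic deep-equilibrium-style implicit differentiation: the forward pass iterates to a fixed point with autograd disabled, and the backward pass solves a small linear system at the equilibrium instead of backpropagating through every iteration. It is a standard DEQ-style construction written for illustration, not Dy-SIGN's dynamic-graph formulation; the class name, the plain fixed-point solvers, and the iteration counts are assumptions.

```python
import torch
import torch.nn as nn
from torch import autograd


class ImplicitEquilibriumLayer(nn.Module):
    """Equilibrium layer: the output z* satisfies z* = f(z*, x).

    The forward pass finds z* by fixed-point iteration with autograd disabled,
    so memory does not grow with the number of iterations. The backward pass
    uses the implicit function theorem: it solves g = grad + g @ (df/dz)^T at
    z* instead of unrolling the forward solver.
    """

    def __init__(self, f, fwd_iters=30, bwd_iters=30):
        super().__init__()
        self.f = f                  # any module/callable with signature f(z, x)
        self.fwd_iters = fwd_iters
        self.bwd_iters = bwd_iters

    def forward(self, x):
        # 1) Solve z = f(z, x) without tracking gradients.
        with torch.no_grad():
            z = torch.zeros_like(x)
            for _ in range(self.fwd_iters):
                z = self.f(z, x)

        # 2) Re-engage autograd with a single call, so f's parameters and x
        #    appear in the graph exactly once.
        z = self.f(z, x)

        # 3) A detached copy of the equilibrium point; its graph is used only
        #    for vector-Jacobian products inside the backward hook.
        z0 = z.detach().requires_grad_()
        f0 = self.f(z0, x)

        def backward_hook(grad):
            # Solve g = grad + g @ J^T (J = df/dz at z*) by fixed-point iteration.
            g = grad
            for _ in range(self.bwd_iters):
                g = grad + autograd.grad(f0, z0, g, retain_graph=True)[0]
            return g

        if z.requires_grad:
            z.register_hook(backward_hook)
        return z
```

Because only the single re-engaged call to `f` is kept in the graph, memory does not grow with the number of forward iterations (or, in the dynamic setting, with the number of unrolled time steps); in a spiking graph model, the wrapped map `f(z, x)` would fold in the graph propagation for each snapshot.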

Experimental Validation

Comparative testing on three large-scale real-world dynamic graph datasets underscores Dy-SIGN's effectiveness in dynamic node classification. The framework achieves lower computational cost and improved accuracy across various settings compared with state-of-the-art methods.

Contributions

The contributions of Dy-SIGN are twofold:

  1. Methodological Advancement: Dy-SIGN is, to the authors' knowledge, the first attempt to apply implicit differentiation in the dynamic graph context, offering a new way to handle information loss and memory consumption when applying SNNs to dynamic graphs.
  2. Superior Performance: Through rigorous experiments, Dy-SIGN demonstrates its superiority over contemporary methods, validating its effectiveness and efficiency in dynamic node classification tasks.

Future Directions

The advent of Dy-SIGN opens several avenues for future research in AI and neural network modeling, specifically pertaining to the efficient manipulation of dynamic graphs. The exploration into further optimizations and applications of the Dy-SIGN framework promises to enrich the toolbox available to researchers and practitioners dealing with evolving data in complex networks. This investigation into dynamic spiking graph neural networks holds the potential to significantly impact the development and deployment of computationally efficient and memory-conscious AI solutions across a wide array of sectors and applications.
