
HeteGraph-Mamba: Heterogeneous Graph Learning via Selective State Space Model (2405.13915v1)

Published 22 May 2024 in cs.LG and cs.SI

Abstract: We propose a heterogeneous graph mamba network (HGMN) as the first exploration of selective state space models (SSSMs) for heterogeneous graph learning. Compared with the literature, our HGMN overcomes two major challenges: (i) capturing long-range dependencies among heterogeneous nodes and (ii) adapting SSSMs to heterogeneous graph data. Our key contribution is a general graph architecture that can handle heterogeneous nodes in real-world scenarios, followed by an efficient processing flow. Methodologically, we introduce a two-level efficient tokenization approach that first captures long-range dependencies within identical node types, and subsequently across all node types. Empirically, we compare our framework against 19 state-of-the-art methods on heterogeneous benchmarks. The extensive comparisons demonstrate that our framework outperforms the other methods in both accuracy and efficiency.
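The two-level tokenization the abstract describes (intra-type ordering first, then a cross-type pass) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the function name, the dict-based graph representation, and the degree-based intra-type ordering are all assumptions made for the example.

```python
def tokenize_two_level(nodes_by_type, degree):
    """Flatten a heterogeneous graph into one token sequence.

    nodes_by_type: dict mapping node type -> list of node ids.
    degree: dict mapping node id -> int, used here as a stand-in
            ordering signal for long-range importance.
    """
    sequence = []
    # Level 1: order nodes within each node type, so that an SSM-style
    # scan sees same-type nodes as a contiguous subsequence.
    for ntype in sorted(nodes_by_type):
        intra = sorted(nodes_by_type[ntype], key=lambda n: -degree[n])
        # Level 2: concatenate the per-type subsequences so the scan
        # subsequently crosses all node types in one global sequence.
        sequence.extend((ntype, n) for n in intra)
    return sequence

nodes = {"author": [0, 1], "paper": [2, 3, 4]}
deg = {0: 3, 1: 1, 2: 5, 3: 2, 4: 4}
print(tokenize_two_level(nodes, deg))
# [('author', 0), ('author', 1), ('paper', 2), ('paper', 4), ('paper', 3)]
```

A selective SSM would then consume this flat sequence; the point of the two-level ordering is that both within-type and cross-type long-range dependencies fall inside a single linear scan.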
