BHGNN-RT: Network embedding for directed heterogeneous graphs (2311.14404v1)

Published 24 Nov 2023 in cs.LG

Abstract: Networks are one of the most valuable data structures for modeling real-world problems. However, most recent node embedding strategies have focused on undirected graphs, with limited attention to directed graphs, especially directed heterogeneous graphs. In this study, we first investigated the network properties of directed heterogeneous graphs. Based on this network analysis, we proposed an embedding method for directed heterogeneous graphs, a bidirectional heterogeneous graph neural network with random teleport (BHGNN-RT), which leverages a bidirectional message-passing process and network heterogeneity. By optimizing the teleport proportion, BHGNN-RT helps overcome the over-smoothing problem. Extensive experiments on various datasets were conducted to verify the efficacy and efficiency of BHGNN-RT. Furthermore, we investigated the effects of message components, model depth, and teleport proportion on model performance. The comparison with all baselines illustrates that BHGNN-RT achieves state-of-the-art performance, outperforming the benchmark methods in both node classification and unsupervised clustering tasks.
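The core mechanism the abstract describes — aggregating messages along both edge directions and mixing in a random-teleport term back to the initial features — can be sketched in a few lines. This is an illustrative simplification under stated assumptions, not the paper's exact formulation: the function name, the equal weighting of the two directions, and the plain row-normalized propagation are all choices made here for clarity, and the heterogeneity-aware components of BHGNN-RT are omitted.

```python
import numpy as np

def bidirectional_teleport_propagation(A, H, alpha=0.1, num_layers=4):
    """Sketch of bidirectional message passing with random teleport.

    A          : (n, n) adjacency matrix of a directed graph
    H          : (n, d) initial node features
    alpha      : teleport proportion (fraction pulled back to H each layer)
    num_layers : number of propagation steps
    """
    def row_norm(M):
        # Row-normalize so each direction acts as a transition operator;
        # rows with no edges stay zero.
        deg = M.sum(axis=1, keepdims=True)
        return np.divide(M, deg, out=np.zeros(M.shape), where=deg > 0)

    P_out = row_norm(A)      # messages along edge direction (out-neighbors)
    P_in = row_norm(A.T)     # messages against edge direction (in-neighbors)

    Z = H.astype(float)
    for _ in range(num_layers):
        # Average the two directions, then teleport back toward H.
        # The teleport term keeps a fixed share of the original signal in
        # every layer, which is what counteracts over-smoothing as depth grows.
        Z = (1 - alpha) * 0.5 * (P_out @ Z + P_in @ Z) + alpha * H
    return Z
```

With `alpha = 1` the propagation collapses to the input features, and with `alpha = 0` it reduces to pure bidirectional smoothing; the paper's observation is that tuning this proportion between the extremes balances neighborhood aggregation against feature retention.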

