
Torsion Graph Neural Networks (2306.13541v1)

Published 23 Jun 2023 in cs.LG and cs.AI

Abstract: Geometric deep learning (GDL) models have demonstrated great potential for the analysis of non-Euclidean data. They are developed to incorporate the geometric and topological information of non-Euclidean data into end-to-end deep learning architectures. Motivated by the recent success of discrete Ricci curvature in graph neural networks (GNNs), we propose TorGNN, an analytic-torsion-enhanced Graph Neural Network model. The essential idea is to characterize graph local structures with an analytic-torsion-based weight formula. Mathematically, analytic torsion is a topological invariant that can distinguish spaces which are homotopy equivalent but not homeomorphic. In our TorGNN, for each edge, a corresponding local simplicial complex is identified, the analytic torsion of this local simplicial complex is calculated, and the result is used as the weight of this edge in the message-passing process. Our TorGNN model is validated on link prediction tasks from sixteen different types of networks and node classification tasks from three types of networks. We find that TorGNN achieves superior performance on both tasks and outperforms various state-of-the-art models. This demonstrates that analytic torsion is a highly efficient topological invariant for characterizing graph structures and can significantly boost the performance of GNNs.
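The edge-weighting idea in the abstract can be illustrated with a minimal sketch. The function names and the restriction to a 1-dimensional local complex below are my own assumptions, not the paper's actual implementation; the sketch uses the standard Ray–Singer-style formula log T = (1/2) Σ_q (−1)^q q log det′(Δ_q), where det′ is the product of the nonzero eigenvalues of the combinatorial Hodge Laplacian Δ_q. For a graph (a 1-complex) only the q = 1 term contributes.

```python
# Hypothetical sketch of analytic torsion as an edge weight (assumed names,
# not the authors' code): build the boundary matrix of a local 1-complex,
# form the edge Laplacian, and evaluate the torsion formula.
import numpy as np

def boundary_matrix(num_nodes, edges):
    """B1[v, e] = -1 at the tail and +1 at the head of oriented edge e."""
    B = np.zeros((num_nodes, len(edges)))
    for j, (u, v) in enumerate(edges):
        B[u, j] = -1.0
        B[v, j] = 1.0
    return B

def analytic_torsion(num_nodes, edges, tol=1e-8):
    """T = exp((1/2) * sum_q (-1)^q * q * log det'(Delta_q)).

    For a 1-dimensional complex only q = 1 contributes, with
    Delta_1 = B1^T B1 (no 2-simplices, so the up-Laplacian vanishes).
    """
    B1 = boundary_matrix(num_nodes, edges)
    delta1 = B1.T @ B1                       # edge (Hodge) Laplacian
    eig = np.linalg.eigvalsh(delta1)
    nonzero = eig[eig > tol]                 # det' = product of nonzero eigenvalues
    log_t = -0.5 * np.sum(np.log(nonzero))   # (1/2) * (-1)^1 * 1 * log det'(Delta_1)
    return float(np.exp(log_t))

# Triangle (3-cycle): Delta_1 has nonzero eigenvalues {3, 3},
# so T = exp(-0.5 * log 9) = 1/3.
triangle = analytic_torsion(3, [(0, 1), (1, 2), (2, 0)])
```

In a TorGNN-style model, this scalar would be computed for the local simplicial complex attached to each edge and then used to scale that edge's contribution during message passing.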

