Boosting Graph Pooling with Persistent Homology (2402.16346v3)

Published 26 Feb 2024 in cs.LG and math.AT

Abstract: Recently, there has been an emerging trend to integrate persistent homology (PH) into graph neural networks (GNNs) to enrich their expressive power. However, naively plugging PH features into GNN layers typically yields only marginal improvements with low interpretability. In this paper, we investigate a novel mechanism for injecting global topological invariance into pooling layers using PH, motivated by the observation that the filtration operation in PH naturally aligns with graph pooling in a cut-off manner. In this fashion, message passing in the coarsened graph acts along the persistent pooled topology, leading to improved performance. Experimentally, we apply our mechanism to a collection of graph pooling methods and observe consistent and substantial performance gains over several popular datasets, demonstrating its wide applicability and flexibility.
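To make the described mechanism concrete, below is a minimal, self-contained sketch of the general idea, not the authors' implementation: a scalar filtration value per node (here simply node degree, a common choice in PH pipelines on graphs) drives pooling via a cut-off, and message passing then runs on the coarsened graph. The function names (filtration_pooling, message_pass) and the degree-based filtration are illustrative assumptions, not details taken from the paper.

import torch

def filtration_pooling(x, adj, keep_ratio=0.5):
    """Toy illustration: pool a graph by thresholding a node filtration.

    Each node gets a scalar filtration value (here: degree); only the
    nodes past the cut-off survive, in the spirit of thresholding a
    filtration at a fixed value, and the induced subgraph is returned.
    """
    deg = adj.sum(dim=1)                      # node filtration value: degree
    k = max(1, int(keep_ratio * x.size(0)))   # number of nodes to keep
    keep = torch.topk(deg, k).indices         # cut-off: top filtration values survive
    x_pooled = x[keep]                        # pooled node features
    adj_pooled = adj[keep][:, keep]           # adjacency of the induced subgraph
    return x_pooled, adj_pooled

def message_pass(x, adj):
    """One round of mean-aggregation message passing on the (coarsened) graph."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return adj @ x / deg

# Tiny usage example on a random undirected graph.
torch.manual_seed(0)
n, d = 8, 4
adj = (torch.rand(n, n) > 0.6).float()
adj = ((adj + adj.t()) > 0).float()          # symmetrize
adj.fill_diagonal_(0)                        # no self-loops
x = torch.randn(n, d)

x_p, adj_p = filtration_pooling(x, adj, keep_ratio=0.5)
out = message_pass(x_p, adj_p)               # messages travel along the pooled topology
print(out.shape)                             # torch.Size([4, 4])

In the paper's actual setting, the filtration would be informed by persistent homology rather than raw degree, so the surviving coarsened graph carries global topological invariance; the sketch only shows where such a filtration-driven cut-off slots into a pooling-then-message-passing pipeline.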
