
T-GAE: Transferable Graph Autoencoder for Network Alignment (2310.03272v4)

Published 5 Oct 2023 in cs.LG and cs.AI

Abstract: Network alignment is the task of establishing one-to-one correspondences between the nodes of different graphs. Although it finds a plethora of applications in high-impact domains, this task is known to be NP-hard in its general form. Existing optimization algorithms do not scale as the size of the graphs increases. While current GNN approaches reduce the matching complexity, they fit a deep neural network to each graph and require retraining on unseen samples, which is time and memory inefficient. To tackle both challenges, we propose T-GAE, a transferable graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment on out-of-distribution graphs without retraining. We prove that GNN-generated embeddings can achieve more accurate alignment than classical spectral methods. Our experiments on real-world benchmarks demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively, while reducing training time by 90% when matching out-of-distribution large-scale networks. We conduct ablation studies to highlight the effectiveness of the proposed encoder architecture and training objective in enhancing the expressiveness of GNNs for matching perturbed graphs. T-GAE is also shown to be flexible enough to utilize matching algorithms of different complexities. Our code is available at https://github.com/Jason-Tree/T-GAE.
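
To make the embed-then-match idea in the abstract concrete, below is a minimal illustrative sketch (not the authors' T-GAE implementation) of the generic pipeline it describes: a pretrained GNN encoder is reused on two graphs without retraining, and node correspondences are obtained by solving a linear assignment problem over embedding distances. The `encoder` object, its call signature, and the choice of Euclidean cost are assumptions for illustration; the paper's actual architecture, training objective, and matching algorithms are described in the full text.

```python
# Illustrative sketch only: a generic embed-then-match pipeline for network
# alignment. It assumes a pretrained GNN encoder `encoder` (any PyTorch module
# mapping (node features, adjacency) -> node embeddings) is available and is
# applied to both graphs without retraining (the transfer setting).
import torch
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist


@torch.no_grad()
def align_graphs(encoder, x1, adj1, x2, adj2):
    """Return (source_idx, target_idx) node correspondences between two graphs."""
    z1 = encoder(x1, adj1).cpu().numpy()  # embeddings of graph 1
    z2 = encoder(x2, adj2).cpu().numpy()  # embeddings of graph 2

    # Pairwise embedding distances define the assignment cost matrix.
    cost = cdist(z1, z2, metric="euclidean")

    # One-to-one matching via the Hungarian algorithm (O(n^3)); a cheaper
    # greedy row-wise argmin could be substituted to trade accuracy for speed,
    # which is the kind of complexity trade-off the abstract alludes to.
    row_ind, col_ind = linear_sum_assignment(cost)
    return row_ind, col_ind
```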
