
Graph Learning under Distribution Shifts: A Comprehensive Survey on Domain Adaptation, Out-of-distribution, and Continual Learning (2402.16374v2)

Published 26 Feb 2024 in cs.LG and cs.SI

Abstract: Graph learning plays a pivotal role in a wide range of application scenarios, from social network analysis to recommendation systems, owing to its effectiveness in modeling the complex relations captured by graph-structured data. In practice, real-world graph data typically evolves over time, with changing node attributes and edge structures, giving rise to severe graph distribution shifts. These shifts are diverse and complex in nature and can substantially degrade the generalization and adaptation capabilities of graph learning methods, posing a significant challenge to their effectiveness. In this survey, we provide a comprehensive review and summary of the latest approaches, strategies, and insights for addressing distribution shifts in graph learning. Concretely, according to the observability of distributions at the inference stage and the availability of sufficient supervision at the training stage, we categorize existing graph learning methods into several essential scenarios: graph domain adaptation learning, graph out-of-distribution learning, and graph continual learning. For each scenario, we propose a detailed taxonomy, with descriptions and discussions of existing progress on distribution-shifted graph learning. Additionally, we discuss potential applications and future directions for graph learning under distribution shifts, together with a systematic analysis of the current state of the field. The survey aims to provide general guidance for developing effective graph learning algorithms that handle graph distribution shifts, and to stimulate future research and advancements in this area.


Summary

  • The paper systematically categorizes graph learning approaches addressing distribution shifts, detailing domain adaptation, out-of-distribution generalization, and continual learning.
  • It reviews representative frameworks such as AdaGCN, GraphDE, and ContinualGNN, which demonstrate enhanced model robustness and adaptability across varied applications.
  • The survey offers actionable insights into overcoming dynamic graph data challenges and outlines promising directions for future research in trustworthy graph learning.

Navigating Distribution Shifts in Graph Learning: A Comprehensive Survey

Introduction

Graph-structured data underpins numerous real-world domains, including biological networks, social networks, and recommendation systems. Its dynamic nature, with evolving relationships and changing node attributes over time, often leads to distribution shifts that significantly impair the generalization and adaptation capabilities of graph learning models. Addressing these shifts is therefore pivotal for deploying effective models in real-world scenarios. This comprehensive survey explores the latest methodologies and strategies for graph learning under distribution shifts, organized into three categories: graph domain adaptation learning, graph out-of-distribution learning, and graph continual learning. Each category is dissected based on the observability of distributions at the inference stage and the availability of supervision information at the training stage, offering a systematic analysis of existing solutions and setting the stage for discussions of practical applications and future directions in this field.

Categorization of Graph Learning Methods Under Distribution Shifts

Graph Domain Adaptation Learning

Graph Domain Adaptation Learning focuses on transferring graph learning models from a source domain to a target domain whose graph data distribution differs. This adaptation ensures that the model remains proficient on the target domain, which is crucial for cross-domain applications. Methods in this category are further classified into semi-supervised, unsupervised, and test-time adaptation approaches, each tailored to a specific scenario of domain adaptability and knowledge transfer.
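One common ingredient of such methods is aligning the statistics of source and target representations before (or while) training a classifier. As a minimal, hypothetical sketch of this idea — using the classical CORAL-style correlation alignment rather than any graph-specific method from the survey, applied here to node-embedding matrices — the source embeddings can be whitened and then re-colored with the target's second-order statistics:

```python
import numpy as np

def coral_align(Xs, Xt, eps=1e-5):
    """Whiten source embeddings, then re-color them with target statistics (CORAL-style).

    Xs, Xt: (n_samples, d) arrays of source / target node embeddings.
    """
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(d)  # regularized source covariance
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(d)  # regularized target covariance

    def mat_pow(C, p):
        # matrix power of a symmetric PSD matrix via eigendecomposition
        w, V = np.linalg.eigh(C)
        return (V * np.maximum(w, eps) ** p) @ V.T

    # remove source statistics, then impose target mean and covariance
    return (Xs - Xs.mean(0)) @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5) + Xt.mean(0)
```

After alignment, a classifier trained on the aligned source embeddings sees target-like first- and second-order statistics; graph-specific approaches typically achieve a similar effect with adversarial domain discriminators or structure-aware alignment losses.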

Graph Out-of-Distribution Learning

Graph Out-of-Distribution Learning addresses the challenge of learning from graphs with unseen distributions, aiming to generalize well to completely novel test graphs. This section introduces methods targeting Graph Out-of-Distribution Generalization, Detection, and Open-world Graph Learning. These approaches enable models to handle unseen classes and distributions effectively.
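The detection branch typically reduces to scoring how confidently a trained model handles each input. As a minimal illustration of this post-hoc idea — the generic maximum-softmax-probability baseline, not the graph-specific detectors such as GraphDE surveyed here; the function names are our own — one can flag low-confidence predictions as likely out-of-distribution:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def msp_ood_score(logits):
    """Maximum-softmax-probability score: higher value = more likely out-of-distribution."""
    return 1.0 - softmax(logits).max(axis=1)
```

A threshold on this score separates confident in-distribution predictions from near-uniform ones; graph-specific detectors replace this per-sample confidence with structure- or neighborhood-aware uncertainty estimates.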

Graph Continual Learning

Graph Continual Learning deals with learning from a stream of graph data over time. This category encompasses strategies designed to incorporate new information from evolving graphs while retaining previously learned knowledge. Architectural, regularization-based, rehearsal-based, and hybrid approaches under this umbrella counteract catastrophic forgetting of old tasks and ensure the seamless integration of new graph data into the learning process.
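Rehearsal-based methods, for instance, keep a small memory of past nodes or subgraphs and replay them alongside each new task. A minimal sketch of such a memory, using reservoir sampling so every item seen in the stream has equal probability of being retained (the class name is ours, not a method from the survey):

```python
import random

class ReplayBuffer:
    """Fixed-capacity memory of past examples for rehearsal-based continual learning."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        # reservoir sampling: item i survives with probability capacity / i
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(item)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = item

    def sample(self, k):
        # mini-batch of stored examples to mix into training on the current task
        return self.rng.sample(self.memory, min(k, len(self.memory)))
```

Graph-aware variants refine the selection step, e.g. preferring structurally influential nodes or class-balanced subgraphs rather than uniform reservoir slots.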

Methodological Insights and Frameworks

The survey presents a detailed analysis of graph learning methods under distribution shifts, discussing their theoretical foundations and practical implementations. Highlights include frameworks such as AdaGCN for graph domain adaptation, GraphDE for graph out-of-distribution detection, and ContinualGNN for graph continual learning. Each methodological insight contributes to understanding the complex nature of graph data and offers solutions that enhance model robustness and adaptability across different graph distribution shifts.

Applications and Future Directions

The implications of effectively addressing graph distribution shifts extend across multiple fields. Applications range from AI-aided drug discovery, where graph learning can accelerate the identification of new pharmaceutical compounds, to personalized recommendation systems which benefit from adapted and continually learning graph models. Furthermore, the integration of graph learning in intelligent transportation systems and open-world knowledge exploration exemplifies the broad utilitarian value of advancing this research area.

Looking forward, the survey identifies promising research directions including data-centric graph learning, cross-modality distribution shift exploration, and the formulation of comprehensive task-driven evaluation protocols. Additionally, fostering trustworthy graph learning under distribution shifts by enhancing aspects such as robustness, explainability, privacy, and fairness represents crucial future milestones.

Conclusion

This survey underscores the significance of tackling distribution shifts in graph learning to harness the full potential of graph structural data across a wide array of practical applications. By systematically categorizing existing methods, highlighting potential applications, and pointing towards uncharted research avenues, it lays a foundation for future investigations and developments in graph learning under distribution shifts. As the field progresses, addressing these challenges will undoubtedly lead to more robust, adaptive, and trustworthy graph learning models, propelling forward the capabilities of AI in handling complex real-world data dynamics.