CPDG: A Contrastive Pre-Training Method for Dynamic Graph Neural Networks (2307.02813v3)

Published 6 Jul 2023 in cs.LG and cs.SI

Abstract: Dynamic graph data mining has gained popularity in recent years due to the rich information contained in dynamic graphs and their widespread use in the real world. Despite the advances in dynamic graph neural networks (DGNNs), the rich information and diverse downstream tasks have posed significant difficulties for the practical application of DGNNs in industrial scenarios. To this end, in this paper, we propose to address them by pre-training and present the Contrastive Pre-Training Method for Dynamic Graph Neural Networks (CPDG). CPDG tackles the challenges of pre-training for DGNNs, including generalization capability and long-short term modeling capability, through a flexible structural-temporal subgraph sampler along with structural-temporal contrastive pre-training schemes. Extensive experiments conducted on both large-scale research and industrial dynamic graph datasets show that CPDG outperforms existing methods in dynamic graph pre-training for various downstream tasks under three transfer settings.
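The abstract names two ingredients: a temporal-structural subgraph sampler and a contrastive pre-training objective. The paper's actual method is not reproduced here; as a rough, self-contained illustration of those two generic ideas (all function names and the toy event format are ours, not CPDG's), a minimal sketch might look like:

```python
import math

def temporal_neighbors(events, node, t, k):
    # Toy temporal neighbor sampler: from a list of (src, dst, timestamp)
    # interaction events, return up to k neighbors of `node` that
    # interacted strictly before time t, most recent first.
    hits = [(ts, v) for (u, v, ts) in events if u == node and ts < t]
    hits.sort(reverse=True)
    return [v for ts, v in hits[:k]]

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Generic InfoNCE-style contrastive loss over embedding vectors
    # (plain lists of floats): pull the positive view toward the anchor,
    # push negatives away. This is the standard objective shape, not
    # CPDG's specific structural-temporal scheme.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    pos = math.exp(dot(anchor, positive) / temperature)
    neg = sum(math.exp(dot(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

if __name__ == "__main__":
    events = [(0, 1, 1.0), (0, 2, 2.0), (0, 3, 3.0), (1, 2, 1.5)]
    print(temporal_neighbors(events, 0, 2.5, 2))  # neighbors of node 0 before t=2.5
    print(info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]]))
```

In a real DGNN pre-training loop, the sampler would produce two temporal views of the same node's subgraph, an encoder would embed them, and the contrastive loss would be minimized over those embeddings.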
