
DyGCL: Dynamic Graph Contrastive Learning For Event Prediction (2404.15612v1)

Published 24 Apr 2024 in cs.SI

Abstract: Predicting events such as political protests, flu epidemics, and criminal activities is crucial for proactively taking necessary measures and implementing required responses to emerging challenges. Capturing contextual information from textual data for event forecasting poses significant challenges due to the intricate structure of the documents and the evolving nature of events. Recently, dynamic Graph Neural Networks (GNNs) have been introduced to capture the dynamic patterns of input text graphs. However, these models utilize only node-level representations, losing the global information carried by graph-level representations. Both node-level and graph-level representations are essential for effective event prediction: the node-level representation gives insight into the local structure, while the graph-level representation provides an understanding of the global structure of the temporal graph. To address these challenges, we propose a Dynamic Graph Contrastive Learning (DyGCL) method for event prediction. DyGCL employs a local view encoder to learn evolving node representations, effectively capturing the local dynamic structure of the input graphs, and a global view encoder to perceive their hierarchical dynamic graph representation. We then update the graph representations from both encoders using contrastive learning. In the final stage, DyGCL combines both representations using an attention mechanism and optimizes its capability to predict future events. Extensive experiments demonstrate that our method outperforms baseline methods for event prediction on six real-world datasets.
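The pipeline in the abstract can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the encoder functions, the attention weight `w`, and the snapshot shapes are stand-in assumptions, with simple pooling substituted for the paper's dynamic GNN and hierarchical encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_encoder(snapshot):
    # stand-in for the local (node-level) dynamic GNN: mean-pool node rows
    return snapshot.mean(axis=0)

def global_encoder(snapshot):
    # stand-in for the hierarchical global encoder: max-pool node rows
    return snapshot.max(axis=0)

def contrastive_loss(z_local, z_global, temperature=0.5):
    # InfoNCE-style objective: snapshot i's local view should match its
    # own global view rather than the views of other snapshots
    z1 = z_local / np.linalg.norm(z_local, axis=1, keepdims=True)
    z2 = z_global / np.linalg.norm(z_global, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def attention_fuse(z_local, z_global, w):
    # scalar attention over the two views, computed per snapshot
    scores = np.stack([z_local @ w, z_global @ w], axis=1)   # (T, 2)
    alpha = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return alpha[:, :1] * z_local + alpha[:, 1:] * z_global

# four temporal snapshots, each with 5 nodes and 8-dim features (toy sizes)
snapshots = [rng.normal(size=(5, 8)) for _ in range(4)]
z_loc = np.stack([local_encoder(g) for g in snapshots])
z_glo = np.stack([global_encoder(g) for g in snapshots])
loss = contrastive_loss(z_loc, z_glo)                  # train both views
fused = attention_fuse(z_loc, z_glo, rng.normal(size=8))  # input to predictor
```

In a full model the fused representation would feed a prediction head trained jointly with the contrastive objective; here the fusion simply yields one vector per time step.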
