
AMCEN: An Attention Masking-based Contrastive Event Network for Two-stage Temporal Knowledge Graph Reasoning (2405.10346v1)

Published 16 May 2024 in cs.LG and cs.AI

Abstract: Temporal knowledge graphs (TKGs) can effectively model the ever-evolving nature of real-world knowledge, and their completeness and enhancement can be achieved by reasoning new events from existing ones. However, reasoning accuracy is adversely impacted by an imbalance between new and recurring events in the datasets. To achieve more accurate TKG reasoning, we propose an attention masking-based contrastive event network (AMCEN) with local-global temporal patterns for the two-stage prediction of future events. In the network, historical and non-historical attention mask vectors are designed to control the attention bias towards historical and non-historical entities, acting as the key to alleviating the imbalance. A local-global message-passing module is proposed to comprehensively capture multi-hop structural dependencies and local-global temporal evolution for the in-depth exploration of latent impact factors of different event types. A contrastive event classifier classifies events more accurately by incorporating local-global temporal patterns into contrastive learning. AMCEN therefore refines the prediction scope with the results of the contrastive event classification, and then applies attention masking-based decoders to finalize the specific outcomes. The results of our experiments on four benchmark datasets highlight the superiority of AMCEN. In particular, the considerable improvements in Hits@1 show that AMCEN can make more precise predictions about future occurrences.
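The attention mask vectors described in the abstract can be illustrated with a minimal sketch: given raw decoder scores over candidate entities, a binary "seen in this query's history" indicator restricts the softmax to either the historical subset (for recurring events) or its complement (for new events). The function name, toy scores, and exact masking scheme here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def masked_entity_scores(scores, historical_mask, use_historical):
    """Restrict a decoder's scores to one entity subset before softmax.

    scores:          (num_entities,) raw scores for every candidate entity
    historical_mask: (num_entities,) 1 if the entity appeared in the query's
                     history, 0 otherwise
    use_historical:  True  -> keep only historical entities (recurring events)
                     False -> keep only non-historical entities (new events)
    """
    keep = historical_mask.astype(bool)
    if not use_historical:
        keep = ~keep
    # Blocked entities get -inf so they receive zero probability mass.
    masked = np.where(keep, scores, -np.inf)
    # Numerically stable softmax over the kept subset.
    probs = np.exp(masked - masked[keep].max())
    return probs / probs.sum()

# Toy query over 5 candidate entities; entities 0 and 3 occurred in history.
scores = np.array([2.0, 0.5, 1.0, 1.5, 0.2])
hist = np.array([1, 0, 0, 1, 0])

p_recurring = masked_entity_scores(scores, hist, use_historical=True)
p_new = masked_entity_scores(scores, hist, use_historical=False)
```

In a two-stage setup like AMCEN's, the contrastive classifier would first decide which of the two distributions to use for a given query, and the corresponding masked decoder would then rank the surviving candidates.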

