Chain of History: Learning and Forecasting with LLMs for Temporal Knowledge Graph Completion (2401.06072v2)

Published 11 Jan 2024 in cs.AI and cs.CL

Abstract: Temporal Knowledge Graph Completion (TKGC) is a complex task involving the prediction of missing event links at future timestamps by leveraging established temporal structural knowledge. This paper aims to provide a comprehensive perspective on harnessing the advantages of LLMs for reasoning in temporal knowledge graphs, presenting an easily transferable pipeline. In terms of graph modality, we underscore the LLMs' prowess in discerning the structural information of pivotal nodes within the historical chain. As for the generation mode of the LLMs utilized for inference, we conduct an exhaustive exploration into the variances induced by a range of inherent factors in LLMs, with particular attention to the challenges in comprehending reverse logic. We adopt a parameter-efficient fine-tuning strategy to harmonize the LLMs with the task requirements, facilitating the learning of the key knowledge highlighted earlier. Comprehensive experiments are undertaken on several widely recognized datasets, revealing that our framework exceeds or parallels existing methods across numerous popular metrics. Additionally, we execute a substantial range of ablation experiments and draw comparisons with several advanced commercial LLMs, to investigate the crucial factors influencing LLMs' performance in structured temporal knowledge inference tasks.

Overview of Temporal Knowledge Graph Completion

Temporal Knowledge Graphs (TKGs) record how entities and their relationships evolve over time. Temporal Knowledge Graph Completion (TKGC), the task of predicting missing event links at future timestamps, is difficult because the graph's structure changes with time. This paper recasts TKGC as a reasoning task for LLMs and presents an easily transferable pipeline for it.
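
To make the setting concrete, a TKG is commonly stored as a set of quadruples (subject, relation, object, timestamp), and a TKGC query asks for the missing object at a future timestamp. The following is a minimal sketch of that data model; the entity and relation names are invented for illustration, not taken from the paper's datasets.

```python
from typing import NamedTuple

class Quadruple(NamedTuple):
    """One timestamped fact (s, r, o, t) in a temporal knowledge graph."""
    subject: str
    relation: str
    obj: str
    timestamp: str  # e.g. an ISO date

# A toy history of observed events; all names are invented for illustration.
history = [
    Quadruple("Country_A", "negotiate_with", "Country_B", "2024-01-03"),
    Quadruple("Country_A", "sign_agreement_with", "Country_B", "2024-01-10"),
]

# A TKGC query: predict the missing object at a future timestamp.
query = ("Country_A", "make_statement_about", "?", "2024-01-17")
```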

Methodology Employed in Research

The authors' method hinges on fine-tuning LLMs to adapt them to temporal sequence prediction: the models are trained on known TKG facts so that they generate the missing entity for a query. Each training input serializes the historical chain of events relevant to the query, which helps the model learn the relationships and patterns needed for forecasting future ones. To keep training affordable, the authors adopt a parameter-efficient fine-tuning strategy, and they propose data augmentation built around "reverse logic" to counter LLMs' known difficulty with inverse relations (both ideas are sketched below).
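
A rough sketch of how such a pipeline might serialize a history chain into a fine-tuning prompt, together with a reverse-logic augmentation that adds an inverse fact for every observed quadruple. The function names, the `inverse_` relation prefix, and the prompt layout are assumptions for illustration, not the paper's exact format.

```python
# Toy quadruples (subject, relation, object, timestamp); names invented.
history = [
    ("Country_A", "negotiate_with", "Country_B", "2024-01-03"),
    ("Country_A", "sign_agreement_with", "Country_B", "2024-01-10"),
]
query = ("Country_A", "make_statement_about", "?", "2024-01-17")

def reverse_augment(facts):
    """Append an inverse fact (o, inverse_r, s, t) for every (s, r, o, t).
    One plausible form of the paper's reverse-logic augmentation."""
    return facts + [(o, f"inverse_{r}", s, t) for s, r, o, t in facts]

def build_prompt(facts, query):
    """Serialize the history chain, oldest first, then append the query
    with the object left blank for the LLM to complete."""
    lines = [f"{t}: [{s}, {r}, {o}]"
             for s, r, o, t in sorted(facts, key=lambda f: f[3])]
    s, r, _, t = query
    lines.append(f"{t}: [{s}, {r},")  # model fills in the missing object
    return "\n".join(lines)

print(build_prompt(reverse_augment(history), query))
```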

Analysis and Experimental Results

Experiments on several widely used TKGC benchmarks show that the fine-tuned LLMs match or exceed existing methods on many popular metrics. Ablations examine how factors such as the amount of available historical information and the size of the underlying LLM affect link-forecasting performance, and the framework is also compared against several advanced commercial LLMs.
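
For reference, the metrics most commonly reported on TKGC benchmarks are mean reciprocal rank (MRR) and Hits@k, and the paper's "popular metrics" are presumably of this family. A minimal sketch of how they are computed from the gold entity's rank in each query's candidate list:

```python
def mrr_and_hits(ranks, ks=(1, 3, 10)):
    """Compute MRR and Hits@k from the 1-based rank of the gold
    entity in each query's candidate ranking."""
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n
    hits = {k: sum(r <= k for r in ranks) / n for k in ks}
    return mrr, hits

# Example: gold entity ranked 1st, 4th, and 2nd across three queries.
mrr, hits = mrr_and_hits([1, 4, 2])
print(f"MRR={mrr:.3f}", {f"Hits@{k}": round(v, 3) for k, v in hits.items()})
```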

Impact of Research and Future Work

The paper demonstrates the potential of using LLMs not just as tools for natural language processing but as predictive models for temporal reasoning in knowledge graphs. The findings suggest that the predictive capabilities of LLMs can be significantly enhanced when equipped with structured, temporal, historical data. This research opens avenues to further apply LLMs to complex tasks in various domains and lays groundwork for future advancements in AI's ability to understand and predict the temporal dynamics of knowledge graphs.

Authors (7)
  1. Ruilin Luo (9 papers)
  2. Tianle Gu (14 papers)
  3. Haoling Li (13 papers)
  4. Junzhe Li (5 papers)
  5. Zicheng Lin (7 papers)
  6. Jiayi Li (62 papers)
  7. Yujiu Yang (155 papers)