
Integrate Temporal Graph Learning into LLM-based Temporal Knowledge Graph Model (2501.11911v1)

Published 21 Jan 2025 in cs.IR

Abstract: Temporal Knowledge Graph Forecasting (TKGF) aims to predict future events based on events observed in history. Recently, LLMs have exhibited remarkable capabilities, generating significant research interest in their application to reasoning over temporal knowledge graphs (TKGs). Existing LLM-based methods integrate retrieved historical facts or static graph representations into LLMs. Despite their notable performance, these methods are limited by insufficient modeling of temporal patterns and ineffective cross-modal alignment between graph and language, hindering the ability of LLMs to fully grasp the temporal and structural information in TKGs. To tackle these issues, we propose TGL-LLM, a novel framework that integrates temporal graph learning into an LLM-based temporal knowledge graph model. Specifically, we introduce temporal graph learning to capture temporal and relational patterns and obtain historical graph embeddings. Furthermore, we design a hybrid graph tokenization scheme to sufficiently model temporal patterns within LLMs. To achieve better alignment between graph and language, we employ a two-stage training paradigm that finetunes LLMs on high-quality, diverse data, resulting in better performance. Extensive experiments on three real-world datasets show that our approach outperforms a range of state-of-the-art (SOTA) methods.
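The abstract describes feeding historical graph embeddings into an LLM via graph tokenization. The sketch below illustrates one common way such a bridge can work: a learned linear projector maps an embedding from a temporal graph encoder into a few "soft tokens" in the LLM's embedding space, which are prepended to the text-token embeddings of the verbalized query. All dimensions, the projector, and the token count are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not specified by the paper):
GRAPH_DIM = 64       # dimension of a historical graph embedding
LLM_DIM = 128        # LLM hidden/embedding dimension
N_GRAPH_TOKENS = 4   # number of "graph tokens" prepended to the prompt

# A linear projector (randomly initialized here; learned in practice)
# mapping one graph embedding to N_GRAPH_TOKENS soft tokens in LLM space.
W = rng.normal(0.0, 0.02, size=(GRAPH_DIM, N_GRAPH_TOKENS * LLM_DIM))

def graph_to_soft_tokens(graph_emb: np.ndarray) -> np.ndarray:
    """Project a historical graph embedding into LLM token space."""
    flat = graph_emb @ W                       # (N_GRAPH_TOKENS * LLM_DIM,)
    return flat.reshape(N_GRAPH_TOKENS, LLM_DIM)

# Toy inputs: one embedding from a temporal graph encoder, plus
# text-token embeddings for the verbalized forecasting query.
graph_emb = rng.normal(size=GRAPH_DIM)
text_tokens = rng.normal(size=(10, LLM_DIM))   # 10 prompt tokens

soft_tokens = graph_to_soft_tokens(graph_emb)
llm_input = np.concatenate([soft_tokens, text_tokens], axis=0)

print(llm_input.shape)  # (14, 128): graph tokens prepended to the prompt
```

In a real system the projector would be trained jointly with (or ahead of) the LLM finetuning stages so that the soft tokens land in a region of embedding space the LLM can interpret.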

Authors (6)
  1. He Chang
  2. Jie Wu
  3. Zhulin Tao
  4. Yunshan Ma
  5. Xianglin Huang
  6. Tat-Seng Chua
