Knowledge Graph Large Language Model (KG-LLM) for Link Prediction (2403.07311v8)

Published 12 Mar 2024 in cs.CL and cs.LG

Abstract: The task of multi-hop link prediction within knowledge graphs (KGs) stands as a challenge in the field of knowledge graph analysis, as it requires the model to reason through and understand all intermediate connections before making a prediction. In this paper, we introduce the Knowledge Graph LLM (KG-LLM), a novel framework that leverages LLMs for knowledge graph tasks. We first convert structured knowledge graph data into natural language and then use these natural language prompts to fine-tune LLMs to enhance multi-hop link prediction in KGs. By converting the KG to natural language prompts, our framework is designed to learn the latent representations of entities and their interrelations. To show the efficacy of the KG-LLM Framework, we fine-tune three leading LLMs within this framework, including Flan-T5, LLaMa2 and Gemma. Further, we explore the framework's potential to provide LLMs with zero-shot capabilities for handling previously unseen prompts. Experimental results show that KG-LLM significantly improves the models' generalization capabilities, leading to more accurate predictions in unfamiliar scenarios.
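The core step the abstract describes is turning a structured multi-hop KG path into a natural-language prompt that a fine-tuned LLM can answer. A minimal sketch of that conversion is below; the entity names, relation phrasing, and prompt template are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical sketch of the KG-to-prompt conversion described in the abstract.
# The template and example triples are assumptions for illustration only.

def path_to_prompt(path):
    """Convert a multi-hop KG path [(head, relation, tail), ...] into a
    natural-language link-prediction prompt over its two endpoints."""
    facts = [f"{h} has relation '{r}' with {t}." for h, r, t in path]
    head, tail = path[0][0], path[-1][2]
    question = (f"Given the facts above, is there a link between "
                f"{head} and {tail}? Answer yes or no.")
    return " ".join(facts) + " " + question

# Example: a two-hop path between Alice and Boston.
triples = [("Alice", "works_at", "AcmeCorp"),
           ("AcmeCorp", "located_in", "Boston")]
print(path_to_prompt(triples))
```

Prompts of this shape would then serve as fine-tuning data, so the model learns to reason over the intermediate hops rather than seeing only the raw triples.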

Authors (6)
  1. Dong Shu
  2. Tianle Chen
  3. Mingyu Jin
  4. Mengnan Du
  5. Yongfeng Zhang
  6. Chong Zhang
Citations (14)