
PPKE: Knowledge Representation Learning by Path-based Pre-training (2012.03573v1)

Published 7 Dec 2020 in cs.CL and cs.AI

Abstract: Entities may have complex interactions in a knowledge graph (KG), such as multi-step relationships, which can be viewed as graph contextual information of the entities. Traditional knowledge representation learning (KRL) methods usually treat a single triple as a training unit and neglect most of the graph contextual information that exists in the topological structure of KGs. In this study, we propose a Path-based Pre-training model to learn Knowledge Embeddings, called PPKE, which aims to integrate more graph contextual information between entities into the KRL model. Experiments demonstrate that our model achieves state-of-the-art results on several benchmark datasets for link prediction and relation prediction tasks, indicating that our model provides a feasible way to take advantage of graph contextual information in KGs.
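
The abstract's core idea, training on multi-step relation paths rather than isolated triples, can be illustrated with a small sketch. The snippet below is a hypothetical illustration, not the paper's implementation: it samples random walks over a toy KG to produce alternating entity/relation sequences of the kind a path-based pre-training model could consume. All entity and relation names, and the `sample_path` helper, are invented for this example.

```python
# Hypothetical sketch: sample multi-hop relation paths from a KG so a
# pre-training model sees graph context, not just single triples.
# Entities, relations, and the helper below are illustrative only.
import random
from collections import defaultdict

# Toy KG as a list of (head, relation, tail) triples.
triples = [
    ("paris", "capital_of", "france"),
    ("france", "located_in", "europe"),
    ("berlin", "capital_of", "germany"),
    ("germany", "located_in", "europe"),
]

# Adjacency index: head entity -> list of (relation, tail) edges.
out_edges = defaultdict(list)
for h, r, t in triples:
    out_edges[h].append((r, t))

def sample_path(start, max_hops=3, rng=random):
    """Random walk yielding an alternating entity/relation sequence,
    e.g. [paris, capital_of, france, located_in, europe]."""
    path = [start]
    node = start
    for _ in range(max_hops):
        edges = out_edges.get(node)
        if not edges:
            break  # dead end: no outgoing edges from this entity
        r, t = rng.choice(edges)
        path.extend([r, t])
        node = t
    return path

if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        print(" -> ".join(sample_path("paris")))
```

Each sampled path serves as one pre-training sequence, so an entity's multi-step neighborhood (its graph context) enters the training signal directly, rather than being reconstructed from independent single-triple updates.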

Authors (6)
  1. Bin He (58 papers)
  2. Di Zhou (60 papers)
  3. Jing Xie (17 papers)
  4. Jinghui Xiao (9 papers)
  5. Xin Jiang (242 papers)
  6. Qun Liu (230 papers)
Citations (1)
