Efficient Parallel Translating Embedding For Knowledge Graphs (1703.10316v4)

Published 30 Mar 2017 in cs.AI

Abstract: Knowledge graph embedding aims to embed the entities and relations of a knowledge graph into low-dimensional vector spaces. Translating embedding methods regard relations as translations from head entities to tail entities and achieve state-of-the-art results among knowledge graph embedding methods. However, a major limitation of these methods is their time-consuming training process, which may take several days or even weeks for large knowledge graphs and makes them difficult to apply in practice. In this paper, we propose an efficient parallel framework for translating embedding methods, called ParTrans-X, which enables these methods to be parallelized without locks by exploiting the distinctive structure of knowledge graphs. Experiments on two datasets with three typical translating embedding methods, i.e., TransE [3], TransH [17], and a more efficient variant TransE-AdaGrad [10], validate that ParTrans-X can speed up the training process by more than an order of magnitude.
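
To make the lock-free idea concrete, here is a minimal Hogwild-style sketch of parallel TransE training, not the paper's actual ParTrans-X implementation. The paper's argument is that knowledge graphs are sparse enough that concurrent workers rarely update the same entity or relation row at the same time, so unlocked shared-memory writes are safe in practice. All names, hyperparameters, and the random toy data below are illustrative, and the sketch assumes a fork-based multiprocessing start method (e.g., Linux) so that child processes see the same shared buffers.

```python
# Lock-free parallel TransE sketch (Hogwild-style shared-memory SGD).
# Illustrative only: dimensions, learning rate, and data are made up.
import numpy as np
from multiprocessing import Process, RawArray

DIM, N_ENT, N_REL = 50, 10000, 200
MARGIN, LR = 1.0, 0.01

# Shared embedding tables allocated WITHOUT a lock (RawArray has none).
ent_buf = RawArray('d', N_ENT * DIM)
rel_buf = RawArray('d', N_REL * DIM)

def as_np(buf, rows):
    # View the shared ctypes buffer as a writable float64 matrix.
    return np.frombuffer(buf).reshape(rows, DIM)

def worker(triples, seed):
    rng = np.random.default_rng(seed)
    E, R = as_np(ent_buf, N_ENT), as_np(rel_buf, N_REL)
    for h, r, t in triples:
        t_neg = rng.integers(N_ENT)           # corrupt the tail for a negative sample
        pos = E[h] + R[r] - E[t]              # TransE: relation as a translation h + r ~ t
        neg = E[h] + R[r] - E[t_neg]
        if MARGIN + np.sum(pos**2) - np.sum(neg**2) > 0:
            g_pos, g_neg = 2 * pos, 2 * neg   # gradients of the squared-L2 scores
            E[h]     -= LR * (g_pos - g_neg)  # unlocked writes to shared memory:
            R[r]     -= LR * (g_pos - g_neg)  # sparse graphs make collisions rare
            E[t]     += LR * g_pos
            E[t_neg] -= LR * g_neg

if __name__ == '__main__':
    E, R = as_np(ent_buf, N_ENT), as_np(rel_buf, N_REL)
    rng = np.random.default_rng(0)
    E[:] = rng.normal(scale=DIM**-0.5, size=E.shape)
    R[:] = rng.normal(scale=DIM**-0.5, size=R.shape)
    # Toy random triples; a real run would load a knowledge graph instead.
    triples = [tuple(rng.integers([N_ENT, N_REL, N_ENT])) for _ in range(4000)]
    shards = [triples[i::4] for i in range(4)]  # one shard per worker process
    procs = [Process(target=worker, args=(s, i)) for i, s in enumerate(shards)]
    for p in procs: p.start()
    for p in procs: p.join()
```

The design choice this illustrates is the trade-off the abstract alludes to: dropping locks means a rare lost update when two workers touch the same row, but in sparse knowledge graphs such collisions are infrequent enough that convergence is barely affected, while throughput scales with the number of workers.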

Authors (5)
  1. Denghui Zhang (33 papers)
  2. Manling Li (47 papers)
  3. Yantao Jia (14 papers)
  4. Yuanzhuo Wang (16 papers)
  5. Xueqi Cheng (274 papers)
Citations (17)
