
Knowledge Graph Embedding by Flexible Translation (1505.05253v2)

Published 20 May 2015 in cs.CL

Abstract: Knowledge graph embedding refers to projecting the entities and relations of a knowledge graph into continuous vector spaces. State-of-the-art methods, such as TransE, TransH, and TransR, build embeddings by treating a relation as a translation from the head entity to the tail entity. However, previous models cannot handle reflexive/one-to-many/many-to-one/many-to-many relations properly, or lack scalability and efficiency. Thus, we propose a novel method, TransF, based on flexible translation, to address these issues. TransF regards a relation as a translation between the head entity vector and the tail entity vector with flexible magnitude. To evaluate the proposed model, we conduct link prediction and triple classification on benchmark datasets. Experimental results show that our method remarkably improves performance compared with several state-of-the-art baselines.
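To make the contrast concrete, here is a minimal sketch of the two scoring ideas the abstract describes. The TransE score is the standard translation distance; the "flexible" score is a hypothetical illustration (a dot product that rewards direction rather than exact position) assumed for this sketch, and the paper's actual scoring function may differ in detail.

```python
import numpy as np

def transe_score(h, r, t):
    # TransE: distance between the translated head (h + r) and the tail t.
    # Lower is better -- h + r is required to land (almost) exactly on t.
    return np.linalg.norm(h + r - t)

def flexible_score(h, r, t):
    # Hypothetical sketch of "flexible translation": reward t for pointing
    # in the same direction as h + r, regardless of magnitude (higher is
    # better). This is an illustrative assumption, not the paper's exact
    # formula.
    return np.dot(h + r, t)

h = np.array([1.0, 0.0])
r = np.array([0.0, 1.0])
t_exact = np.array([1.0, 1.0])   # exactly h + r
t_scaled = np.array([2.0, 2.0])  # same direction, larger magnitude

# TransE penalizes the scaled tail even though it lies on the same ray,
# while the direction-based score still rates it at least as highly.
print(transe_score(h, r, t_exact), transe_score(h, r, t_scaled))
print(flexible_score(h, r, t_exact), flexible_score(h, r, t_scaled))
```

This is the intuition behind allowing the translation "flexible magnitude": entities that agree in direction can score well even when their vector lengths differ, which helps with one-to-many and many-to-one relations where many tails cannot all coincide with a single translated head.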

Authors (5)
  1. Jun Feng (55 papers)
  2. Mantong Zhou (5 papers)
  3. Yu Hao (32 papers)
  4. Minlie Huang (226 papers)
  5. Xiaoyan Zhu (54 papers)
Citations (45)
