
CompoundE: Knowledge Graph Embedding with Translation, Rotation and Scaling Compound Operations (2207.05324v1)

Published 12 Jul 2022 in cs.AI, cs.CL, and cs.LG

Abstract: Translation, rotation, and scaling are three commonly used geometric manipulation operations in image processing. Moreover, some of them have been successfully used to develop effective knowledge graph embedding (KGE) models such as TransE and RotatE. Inspired by this synergy, we propose a new KGE model that leverages all three operations. Since the translation, rotation, and scaling operations are cascaded to form a compound operation, the new model is named CompoundE. By casting CompoundE in the framework of group theory, we show that quite a few scoring-function-based KGE models are special cases of CompoundE. CompoundE extends the simple distance-based relation to relation-dependent compound operations on head and/or tail entities. To demonstrate the effectiveness of CompoundE, we conduct experiments on three popular KG completion datasets. Experimental results show that CompoundE consistently achieves state-of-the-art performance.
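The compound operation described in the abstract can be illustrated with a minimal sketch. The code below is an illustrative interpretation, not the paper's exact parameterization: it assumes 2D block rotations (as in RotatE's real-valued view), a per-relation scaling vector and translation vector, an L1 distance score, and one particular composition order (scale, then rotate, then translate). Note how setting the rotation angles to zero and the scale to one recovers a TransE-style translation, consistent with the abstract's claim that simpler models are special cases.

```python
import numpy as np

def compound_op(h, translation, angles, scale):
    """Apply a scaling, then 2x2 block rotations, then a translation to the
    head embedding h. One possible cascade; the paper composes these operators
    per relation."""
    d = h.reshape(-1, 2)          # pair up dimensions for 2D rotations
    s = scale.reshape(-1, 2)
    cos, sin = np.cos(angles), np.sin(angles)
    scaled = d * s
    rotated = np.stack(
        [cos * scaled[:, 0] - sin * scaled[:, 1],
         sin * scaled[:, 0] + cos * scaled[:, 1]],
        axis=1,
    )
    return rotated.reshape(-1) + translation

def score(h, relation, t):
    """Distance-based plausibility score for a triple (h, r, t): higher
    (less negative) means more plausible."""
    translation, angles, scale = relation
    return -np.linalg.norm(compound_op(h, translation, angles, scale) - t, ord=1)
```

With zero angles and unit scale the compound operation degenerates to pure translation, so `score` behaves like TransE's scoring function for that relation.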

Authors (4)
  1. Xiou Ge (13 papers)
  2. Yun-Cheng Wang (17 papers)
  3. Bin Wang (750 papers)
  4. C. -C. Jay Kuo (176 papers)
Citations (11)