Principled Representation Learning for Entity Alignment (2110.10871v1)

Published 21 Oct 2021 in cs.CL and cs.AI

Abstract: Embedding-based entity alignment (EEA) has recently received great attention. Despite significant performance improvements, little effort has been devoted to understanding how EEA methods work. Most existing studies rest on the assumption that a small number of pre-aligned entities can serve as anchors connecting the embedding spaces of two KGs. Nevertheless, the rationality of this assumption has not been investigated. To fill this research gap, we define a typical paradigm abstracted from existing EEA methods and analyze how the embedding discrepancy between two potentially aligned entities is implicitly bounded by a predefined margin in the scoring function. We further find that such a bound is not guaranteed to be tight enough for alignment learning. We mitigate this problem by proposing a new approach, named NeoEA, that explicitly learns KG-invariant and principled entity embeddings. In this sense, an EEA model not only pursues the closeness of aligned entities in terms of geometric distance, but also aligns the neural ontologies of two KGs by eliminating the discrepancy in embedding distribution and underlying ontology knowledge. Our experiments demonstrate consistent and significant performance improvements over the best-performing EEA methods.
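
To make the abstract's two-part objective concrete, here is a minimal sketch of the general idea: a margin-based loss that pulls pre-aligned anchor pairs together (the implicit bound the paper analyzes), plus an explicit penalty on the discrepancy between the two KGs' embedding distributions. The function names and the moment-matching discrepancy are illustrative assumptions, not the paper's actual NeoEA implementation, which may use a different discrepancy measure.

```python
import torch

def margin_alignment_loss(emb1, emb2, margin=1.0):
    """Margin-based scoring over pre-aligned (anchor) entity pairs.

    emb1, emb2: (n, d) embeddings of n anchor pairs from KG1 and KG2.
    As the abstract notes, this margin only *implicitly* bounds the
    distance between potentially aligned entities.
    """
    pos = (emb1 - emb2).norm(dim=1)                              # aligned pairs
    neg = (emb1 - emb2[torch.randperm(len(emb2))]).norm(dim=1)   # corrupted pairs
    return torch.relu(pos - neg + margin).mean()

def distribution_discrepancy(emb1, emb2):
    """Moment-matching stand-in for distribution alignment: penalize
    differences between the two KGs' embedding means and covariances.
    (Hypothetical; the paper's actual objective may differ.)
    """
    mean_gap = (emb1.mean(0) - emb2.mean(0)).pow(2).sum()
    cov_gap = (torch.cov(emb1.T) - torch.cov(emb2.T)).pow(2).sum()
    return mean_gap + cov_gap

def neoea_style_loss(emb1, emb2, lam=0.1):
    """Combined objective: geometric closeness of anchors plus explicit
    alignment of the embedding distributions ("neural ontologies")."""
    return margin_alignment_loss(emb1, emb2) + lam * distribution_discrepancy(emb1, emb2)
```

The key contrast with a plain margin loss is the second term: rather than relying on anchors alone to connect the two embedding spaces, it directly drives the spaces toward a shared, KG-invariant distribution.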

Authors (6)
  1. Lingbing Guo (27 papers)
  2. Zequn Sun (32 papers)
  3. Mingyang Chen (45 papers)
  4. Wei Hu (309 papers)
  5. Qiang Zhang (466 papers)
  6. Huajun Chen (198 papers)
