
Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network (1905.11605v3)

Published 28 May 2019 in cs.LG and cs.CL

Abstract: Previous cross-lingual knowledge graph (KG) alignment studies rely on entity embeddings derived only from monolingual KG structural information, which may fail at matching entities that have different facts in two KGs. In this paper, we introduce the topic entity graph, a local sub-graph of an entity, to represent entities with their contextual information in KG. From this view, the KB-alignment task can be formulated as a graph matching problem; and we further propose a graph-attention based solution, which first matches all entities in two topic entity graphs, and then jointly model the local matching information to derive a graph-level matching vector. Experiments show that our model outperforms previous state-of-the-art methods by a large margin.

Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network: A Technical Overview

The paper "Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network" addresses a crucial challenge in multilingual NLP: aligning cross-lingual knowledge graphs (KGs) to bridge the language gap inherent in multilingual datasets. Multilingual KGs like DBpedia and Yago are invaluable resources; however, the sparsity of cross-lingual links between their language-specific versions hampers their full utility. This paper presents a method that aligns these KGs more effectively than previous approaches by reframing the task as a graph matching problem.

The authors introduce the "topic entity graph" to encapsulate the contextual information of an entity within a KG. This departs from prior approaches, which rely on entity embeddings derived solely from monolingual structural information and therefore often fail when counterpart entities are described by different facts in the two KGs. The proposal recasts the knowledge base (KB) alignment task as a graph matching problem, using a graph-attention-based method that aggregates local, entity-level matching information into a graph-level matching vector.
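To make the topic entity graph concrete, the sketch below builds one as the 1-hop neighborhood of a target entity. The triple-list representation, function name, and toy KG are illustrative assumptions, not the paper's actual data structures.

```python
# Minimal sketch: a "topic entity graph" as the 1-hop neighborhood of a
# target entity. The (head, relation, tail) triple list and the function
# name are assumptions made for illustration.
def build_topic_entity_graph(triples, topic_entity):
    """Collect all triples whose head or tail is the topic entity,
    returning the local sub-graph (nodes and edges) around it."""
    nodes, edges = {topic_entity}, []
    for head, relation, tail in triples:
        if head == topic_entity or tail == topic_entity:
            nodes.update([head, tail])
            edges.append((head, relation, tail))
    return nodes, edges

# Toy Chinese-language KG fragment
zh_triples = [
    ("巴黎", "首都", "法国"),
    ("巴黎", "位于", "欧洲"),
    ("柏林", "首都", "德国"),
]
nodes, edges = build_topic_entity_graph(zh_triples, "巴黎")
print(nodes)  # the 1-hop neighbours of 巴黎: 巴黎, 法国, 欧洲
```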

Methodology

The core of this work lies in its application of graph convolutional networks (GCNs) to encode the structure of topic entity graphs. Every entity and its contextual relations in the two KGs are encoded separately, producing a list of entity embeddings for each graph. An attentive-matching mechanism then compares each entity from the first graph against all entities in the second, producing cross-lingual, KG-aware matching vectors that capture how well the two graphs' entities correspond.
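The following sketch illustrates the attentive-matching idea under stated assumptions: each entity embedding in one graph attends over all entity embeddings in the other, and its matching vector combines the entity with that attention-weighted summary. The tensor shapes, the cosine-similarity attention, and the concatenation scheme are simplifications for illustration, not the paper's exact formulation.

```python
# Hedged sketch of attentive matching between two topic entity graphs.
import torch
import torch.nn.functional as F

def attentive_match(h1, h2):
    """h1: (n1, d) entity embeddings of graph 1; h2: (n2, d) of graph 2.
    Returns (n1, 2d) matching vectors for graph 1's entities."""
    # Cosine-similarity attention from every entity in graph 1 to graph 2
    sim = F.cosine_similarity(h1.unsqueeze(1), h2.unsqueeze(0), dim=-1)  # (n1, n2)
    attn = F.softmax(sim, dim=-1)                                        # (n1, n2)
    context = attn @ h2                                                  # (n1, d)
    # Concatenate each entity with its cross-graph attentive summary
    return torch.cat([h1, context], dim=-1)

h1, h2 = torch.randn(5, 64), torch.randn(7, 64)
match_vecs = attentive_match(h1, h2)  # shape (5, 128)
```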

An additional GCN layer propagates this local matching information across each graph, yielding a graph-level matching vector from which the similarity of the two topic entity graphs, and hence of their topic entities, is scored.
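A minimal sketch of this aggregation step is shown below: one more graph-convolution pass over the local matching vectors, followed by pooling into a graph-level vector and a scoring layer for a graph pair. The layer sizes, the single-linear-layer stand-in for a GCN, and mean pooling are assumptions for illustration.

```python
# Hedged sketch: aggregating local matching vectors into a graph-level
# matching vector and scoring a pair of topic entity graphs.
import torch
import torch.nn as nn

class GraphMatchAggregator(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gcn = nn.Linear(dim, dim)       # stand-in for one GCN layer
        self.score = nn.Linear(2 * dim, 1)   # scores a pair of graph vectors

    def forward(self, match_vecs, adj):
        # Propagate matching information along the graph's edges (adj: n x n)
        h = torch.relu(adj @ self.gcn(match_vecs))
        return h.mean(dim=0)                 # graph-level matching vector

    def pair_score(self, g1_vec, g2_vec):
        # Probability that the two topic entities are counterparts
        return torch.sigmoid(self.score(torch.cat([g1_vec, g2_vec], dim=-1)))
```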

Experimental Evaluation

The model's performance is evaluated on the DBP15K dataset, which comprises knowledge graph pairs across several language pairs (Chinese-English, Japanese-English, and French-English). The results show that the proposed model surpasses previous state-of-the-art methods, with the paper reporting large improvements in Hit@1 and Hit@10 across all tested language pairs. These results underscore the efficacy of integrating both surface-form information and KG structural context into entity embeddings.
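For reference, the Hit@k metric can be computed as in the sketch below: for each source entity, rank all candidate target entities by similarity and check whether the gold counterpart appears among the top k. The random similarity matrix and the assumption that source entity i aligns with target entity i are illustrative only.

```python
# Minimal sketch of Hit@k evaluation for entity alignment.
import numpy as np

def hits_at_k(sim_matrix, k):
    """sim_matrix[i, j]: similarity of source entity i to target entity j;
    the gold counterpart of source i is assumed to be target i."""
    ranks = (-sim_matrix).argsort(axis=1)  # best candidates first
    hits = [(ranks[i, :k] == i).any() for i in range(sim_matrix.shape[0])]
    return float(np.mean(hits))

sim = np.random.rand(100, 100)  # placeholder similarity scores
print(f"Hit@1:  {hits_at_k(sim, 1):.3f}")
print(f"Hit@10: {hits_at_k(sim, 10):.3f}")
```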

Implications and Future Directions

Practically, this method expands the horizon for multilingual applications of KGs in NLP, facilitating more nuanced and accurate linking of KGs across language barriers. By framing KG alignment as a comprehensive graph matching problem, the paper pushes for further refinement in cross-lingual NLP endeavors, possibly inspiring applications in fields like cross-cultural information retrieval, machine translation, and multilingual semantic search.

Theoretically, this work opens avenues for enhancing graph-based models and neural network design for KGs, suggesting that future research may focus on refining graph embedding techniques and scaling these methods for larger, more complex datasets. Additionally, there is potential to explore its applications in domains requiring entity disambiguation across disparate datasets.

Overall, this paper provides a cohesive framework for cross-lingual KG alignment that is both more expressive and empirically more robust than its predecessors, contributing a significant tool for handling the linguistic diversity that characterizes the modern information landscape.

Authors (7)
  1. Kun Xu (277 papers)
  2. Liwei Wang (239 papers)
  3. Mo Yu (117 papers)
  4. Yansong Feng (81 papers)
  5. Yan Song (91 papers)
  6. Zhiguo Wang (100 papers)
  7. Dong Yu (329 papers)
Citations (226)