Knowledge Graph Alignment Network with Gated Multi-hop Neighborhood Aggregation (1911.08936v1)

Published 20 Nov 2019 in cs.CL, cs.AI, and cs.LG

Abstract: Graph neural networks (GNNs) have emerged as a powerful paradigm for embedding-based entity alignment due to their capability of identifying isomorphic subgraphs. However, in real knowledge graphs (KGs), the counterpart entities usually have non-isomorphic neighborhood structures, which easily causes GNNs to yield different representations for them. To tackle this problem, we propose a new KG alignment network, namely AliNet, aiming at mitigating the non-isomorphism of neighborhood structures in an end-to-end manner. As the direct neighbors of counterpart entities are usually dissimilar due to the schema heterogeneity, AliNet introduces distant neighbors to expand the overlap between their neighborhood structures. It employs an attention mechanism to highlight helpful distant neighbors and reduce noises. Then, it controls the aggregation of both direct and distant neighborhood information using a gating mechanism. We further propose a relation loss to refine entity representations. We perform thorough experiments with detailed ablation studies and analyses on five entity alignment datasets, demonstrating the effectiveness of AliNet.

Citations (291)

Summary

  • The paper introduces AliNet, a model that aggregates multi-hop neighborhood information with attentive gating to address non-isomorphic structures in knowledge graphs.
  • It combines a contrastive alignment loss with a novel relation loss, achieving superior performance on benchmarks using metrics like Hits@1 and MRR.
  • The use of layer-wise representation and neighborhood augmentation techniques provides a robust strategy to mitigate structural heterogeneity in real-world KGs.

The paper "Knowledge Graph Alignment Network with Gated Multi-hop Neighborhood Aggregation" (Knowledge Graph Alignment Network with Gated Multi-hop Neighborhood Aggregation, 2019) addresses the problem of aligning entities across different knowledge graphs (KGs) which often suffer from schema heterogeneity and data incompleteness. Existing graph neural network (GNN)-based methods for entity alignment, while powerful at identifying isomorphic subgraphs, struggle when the neighborhood structures of counterpart entities are dissimilar (non-isomorphic).

The core contribution of the paper is the proposed AliNet model, designed to mitigate the impact of non-isomorphic neighborhood structures by effectively aggregating multi-hop neighborhood information. The key ideas implemented in AliNet are:

  1. Multi-hop Neighborhood Aggregation: Recognizing that semantically related information might reside in distant neighbors when direct neighbors are dissimilar, AliNet incorporates neighborhood information from beyond the immediate one-hop neighbors. The paper demonstrates this specifically for two-hop neighbors but notes the approach can extend to k hops (a sketch of such a layer appears after this list).
  2. Attentive Distant Neighborhood Aggregation: Not all distant neighbors are equally informative; some might introduce noise. To address this, AliNet employs an attention mechanism when aggregating information from distant (e.g., two-hop) neighbors. This attention mechanism uses separate linear transformations for the central entity and its neighbors before computing attention scores, allowing the model to selectively focus on helpful distant neighbors.
  3. Gated Information Combination: A gating mechanism is introduced to control the combination of information aggregated from one-hop and distant (e.g., two-hop) neighbors. This gating allows the model to dynamically weigh the importance of information from different hop distances, adapting to varying levels of structural dissimilarity.
  4. Relation Semantics Modeling: To capture relational information without relying on pre-aligned relations or introducing numerous relation-specific parameters (as in R-GCN), AliNet borrows the translational assumption from TransE. It computes a vector representation for each relation as the average difference between subject and object embeddings across triples involving that relation. A relation loss term is added to the objective function to encourage this translational property, refining the entity embeddings.
  5. Layer-wise Representation Combination: Instead of using only the representation from the final GNN layer, AliNet concatenates and L2-normalizes the representations from all intermediate layers and the input layer to form the final entity embedding. This leverages the fact that representations from different layers capture structural information at varying scales and all contribute to propagating alignment signals (see the encoder sketch after this list).
  6. Neighborhood Augmentation: A heuristic is proposed to add edges between pre-aligned entities across KGs if they are connected in one KG but not the other. This augmentation aims to explicitly reduce non-isomorphism for the seed alignments, facilitating better learning.
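
The following PyTorch sketch illustrates one gated, attentive multi-hop layer in the spirit of items 1-3. The class name GatedMultiHopLayer, the dense adjacency inputs adj1/adj2, and the exact form of the gate are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedMultiHopLayer(nn.Module):
    """Minimal sketch of a gated, attentive multi-hop aggregation layer:
    GCN-style aggregation over one-hop neighbors, attention over two-hop
    neighbors, and a gate that mixes the two results."""

    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.w1 = nn.Linear(dim_in, dim_out, bias=False)          # one-hop transform
        self.w2 = nn.Linear(dim_in, dim_out, bias=False)          # two-hop transform
        self.att_center = nn.Linear(dim_in, dim_out, bias=False)  # separate transforms for
        self.att_neigh = nn.Linear(dim_in, dim_out, bias=False)   # the attention scores
        self.gate = nn.Linear(dim_out, dim_out)                   # gating mechanism

    def forward(self, h, adj1, adj2):
        # h:    [n, dim_in] entity representations
        # adj1: [n, n] row-normalized one-hop adjacency (with self-loops)
        # adj2: [n, n] binary two-hop adjacency mask
        h1 = torch.relu(adj1 @ self.w1(h))                         # one-hop aggregation

        # Attention over two-hop neighbors, using distinct transforms
        # for the central entity and its neighbors.
        scores = self.att_center(h) @ self.att_neigh(h).t()        # [n, n] raw scores
        scores = scores.masked_fill(adj2 == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)
        alpha = torch.nan_to_num(alpha)                            # rows without 2-hop neighbors
        h2 = torch.relu(alpha @ self.w2(h))                        # attentive two-hop aggregation

        # The gate controls how much distant information is kept.
        g = torch.sigmoid(self.gate(h2))
        return g * h2 + (1.0 - g) * h1
```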

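Building on the layer sketch above, a two-layer encoder with the layer-wise combination of item 5 might look as follows. The two-layer depth matches the paper's best-performing setting; the 300-dimensional embedding size and the class name AliNetEncoder are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AliNetEncoder(nn.Module):
    """Sketch of stacking two gated layers and concatenating the
    L2-normalized outputs of the input layer and every hidden layer."""

    def __init__(self, num_entities, dim=300):
        super().__init__()
        self.emb = nn.Embedding(num_entities, dim)   # randomly initialized, trained input features
        self.layers = nn.ModuleList([GatedMultiHopLayer(dim, dim) for _ in range(2)])

    def forward(self, adj1, adj2):
        h = self.emb.weight
        outputs = [F.normalize(h, p=2, dim=-1)]      # include the input layer
        for layer in self.layers:
            h = layer(h, adj1, adj2)
            outputs.append(F.normalize(h, p=2, dim=-1))  # L2-normalize each layer's output
        return torch.cat(outputs, dim=-1)            # final concatenated entity representation
```
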
Implementation Details:

  • The model optimizes a combined loss function consisting of a contrastive alignment loss (L1) and the proposed relation loss (L2). The contrastive loss minimizes the distance between aligned entities and pushes unaligned (negative) pairs apart by at least a margin λ.
  • Negative samples for L1 are generated by randomly replacing one entity in a pre-aligned pair.
  • The overall objective is L = L1 + α2 · L2, where α2 balances the two terms (a minimal sketch of both loss terms follows this list).
  • Optimization is performed using the Adam optimizer. Entity input features are randomly initialized and trained.
  • Neighborhood aggregation utilizes sparse matrix multiplication for efficiency, making the storage complexity linear in the number of entities and triples.
  • For predicting alignment, the L2 distance between the final concatenated entity representations is used, followed by nearest neighbor search (specifically, CSLS was used in the experiments).
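
A minimal sketch of the two loss terms follows, assuming entity representations h indexed by integer IDs; the helper names alignment_loss and relation_loss and the hyperparameter values are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def alignment_loss(h, pos_pairs, neg_pairs, margin=1.5, alpha1=0.1):
    """Contrastive alignment loss L1 (sketch): pull pre-aligned pairs together,
    push sampled negative pairs apart up to the margin."""
    pos_d = torch.norm(h[pos_pairs[:, 0]] - h[pos_pairs[:, 1]], dim=-1)
    neg_d = torch.norm(h[neg_pairs[:, 0]] - h[neg_pairs[:, 1]], dim=-1)
    return pos_d.sum() + alpha1 * F.relu(margin - neg_d).sum()

def relation_loss(h, triples_by_relation):
    """Relation loss L2 (sketch): each relation vector is the average
    subject-minus-object difference over its triples; the loss keeps
    h_subject - h_object close to that vector (translational assumption)."""
    loss = 0.0
    for rel, pairs in triples_by_relation.items():   # pairs: LongTensor [m, 2] of (subject, object)
        diff = h[pairs[:, 0]] - h[pairs[:, 1]]
        r = diff.mean(dim=0, keepdim=True)           # relation representation
        loss = loss + torch.norm(diff - r, dim=-1).sum()
    return loss

# Overall objective: L = L1 + alpha2 * L2, optimized with Adam.
```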

Experimental Evaluation:

The paper evaluates AliNet on five datasets: DBP15K (ZH-EN, JA-EN, FR-EN) and DWY100K (DBP-WD, DBP-YG), comparing it against various KG embedding and GNN-based entity alignment methods. Performance is measured using Hits@1, Hits@10, and Mean Reciprocal Rank (MRR).
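
For reference, Hits@k and MRR can be computed from a distance matrix as in the following sketch. It assumes the true counterpart of source entity i sits at column i and uses plain nearest-neighbor ranking, whereas the paper's reported numbers use CSLS.

```python
import numpy as np

def hits_and_mrr(dist, ks=(1, 10)):
    """dist[i, j]: distance from source entity i to candidate target entity j.
    Returns Hits@k for each k and the mean reciprocal rank (MRR)."""
    ranks = []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])                       # candidates by ascending distance
        ranks.append(int(np.where(order == i)[0][0]) + 1) # rank of the true counterpart
    ranks = np.asarray(ranks)
    hits = {k: float((ranks <= k).mean()) for k in ks}
    mrr = float((1.0 / ranks).mean())
    return hits, mrr
```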

Results show that AliNet consistently outperforms existing state-of-the-art structure-based embedding models, demonstrating the effectiveness of its multi-hop aggregation, attention, and gating mechanisms, as well as the relation loss and neighborhood augmentation. Ablation studies further validate the contribution of each component, showing that both the relation loss and neighborhood augmentation improve performance. The analyses on aggregation strategies highlight that the proposed attentive and gated combination is superior to simpler methods like direct mixing or addition. Experiments on the number of layers and hop distances per layer suggest that two layers, each aggregating up to two hops, provide the best balance between capturing relevant information and avoiding noise from increasingly distant neighbors. Analysis of the neighborhood overlap coefficient shows that AliNet is better able to align entities with lower structural similarity in their direct neighborhoods compared to simpler GCNs.

In essence, AliNet provides a practical framework for robust entity alignment by explicitly modeling and aggregating information from multi-hop neighborhoods in a controlled and attentive manner, thereby effectively handling the structural heterogeneity inherent in real-world KGs.