
End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion (1811.04441v2)

Published 11 Nov 2018 in cs.AI and cs.CL

Abstract: Knowledge graph embedding has been an active research topic for knowledge base completion, with progressive improvement from the initial TransE, TransH, DistMult et al to the current state-of-the-art ConvE. ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs. The model can be efficiently trained and scalable to large knowledge graphs. However, there is no structure enforcement in the embedding space of ConvE. The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure. In this work, we propose a novel end-to-end Structure-Aware Convolutional Network (SACN) that takes the benefit of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and edge relation types. It has learnable weights that adapt the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes in the graph are represented as additional nodes in the WGCN. The decoder Conv-TransE enables the state-of-the-art ConvE to be translational between entities and relations while keeps the same link prediction performance as ConvE. We demonstrate the effectiveness of the proposed SACN on standard FB15k-237 and WN18RR datasets, and it gives about 10% relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.

Authors (6)
  1. Chao Shang (24 papers)
  2. Yun Tang (42 papers)
  3. Jing Huang (140 papers)
  4. Jinbo Bi (28 papers)
  5. Xiaodong He (162 papers)
  6. Bowen Zhou (141 papers)
Citations (516)

Summary

End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion

The paper presents an approach to knowledge base (KB) completion built around the End-to-end Structure-Aware Convolutional Network (SACN). The method combines the strengths of Graph Convolutional Networks (GCNs) with the convolutional scoring model ConvE to improve knowledge graph embedding.

Core Contributions

Integration of GCN and Convolutional Techniques

SACN pairs the graph connectivity structure exploited by GCNs with the convolutional scoring of ConvE, aiming to improve the representation of entities and relations within a knowledge graph. This is achieved via a two-part encoder-decoder model:

  • Encoder (WGCN): A weighted graph convolutional network that enhances node representations by aggregating information from each node's neighbors. The WGCN uses learnable, relation-specific weights to adjust how much each neighbor contributes (a minimal sketch follows this list).
  • Decoder (Conv-TransE): A convolutional scoring network that retains the translational property of TransE by convolving the entity and relation embeddings without reshaping them, matching ConvE's link prediction performance while adding that translational structure.
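The following is a minimal PyTorch-style sketch of a weighted GCN layer in the spirit of the WGCN encoder. The class name `WGCNLayer`, the per-relation scalar `alpha`, and the edge-list representation are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class WGCNLayer(nn.Module):
    """Weighted-GCN layer sketch: neighbor messages are scaled by a learnable
    scalar per relation type (illustrative, not the authors' code)."""
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # One learnable interaction strength alpha_r per relation type.
        self.alpha = nn.Parameter(torch.ones(num_relations))

    def forward(self, node_feats, edge_index, edge_type):
        # node_feats: (N, in_dim); edge_index: (2, E) source/target node ids;
        # edge_type: (E,) relation id for each edge.
        src, dst = edge_index
        msgs = node_feats[src] * self.alpha[edge_type].unsqueeze(-1)
        agg = torch.zeros_like(node_feats)
        agg.index_add_(0, dst, msgs)        # sum relation-weighted neighbor messages
        return torch.tanh(self.linear(agg + node_feats))  # self-loop, then project
```

In the full model, node attributes are represented as additional nodes in the graph, so the same aggregation also pulls in attribute information.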

Enhanced Performance

In the reported experiments, SACN improves on the state-of-the-art ConvE by roughly 10% relative in HITS@1, HITS@3, and HITS@10 on the standard FB15k-237 and WN18RR datasets. This gain underscores the value of incorporating structural awareness into node embedding generation.
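For reference, HITS@k simply measures the fraction of test triples whose correct entity is ranked within the top k candidates by the link predictor. A toy illustration with made-up ranks:

```python
def hits_at_k(ranks, k):
    """Fraction of test triples whose correct entity is ranked in the top k.
    `ranks` holds 1-based ranks produced by the link-prediction scorer."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

# Toy example: ranks for five test triples (not real results)
ranks = [1, 3, 12, 2, 7]
print(hits_at_k(ranks, 1))   # 0.2
print(hits_at_k(ranks, 3))   # 0.6
print(hits_at_k(ranks, 10))  # 0.8
```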

Methodological Innovations

SACN introduces several nuanced methodological adjustments:

  • Removal of Input Reshaping in Conv-TransE: By dropping ConvE’s 2D reshaping step, Conv-TransE keeps the entity and relation embeddings aligned along the embedding dimension, preserving the translational property through the convolution (see the sketch after this list).
  • Incorporation of Node Attributes in WGCN: SACN innovatively represents attribute information as additional nodes in the graph. This process ensures that node attributes augment rather than distort the embedding space.
  • Relation-Specific Edge Weighting: The WGCN component allows for adaptive learning of edge weights based on relation types, refining the convolution process to better respect the knowledge graph's relational structure.
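A minimal sketch of a Conv-TransE-style decoder is shown below, assuming PyTorch. Stacking the subject and relation embeddings as two rows and convolving along the embedding dimension follows the description above, but the names and hyperparameters (`ConvTransE`, `num_filters`, `kernel_width`) are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConvTransE(nn.Module):
    """Decoder sketch: subject and relation embeddings are stacked as two rows
    (no 2D reshaping), convolved along the embedding dimension, projected back
    to d, and scored against every entity (illustrative only)."""
    def __init__(self, emb_dim, num_filters=32, kernel_width=3):
        super().__init__()
        self.conv = nn.Conv1d(2, num_filters, kernel_width,
                              padding=kernel_width // 2)
        self.fc = nn.Linear(num_filters * emb_dim, emb_dim)

    def forward(self, e_s, r, all_entities):
        # e_s, r: (batch, d); all_entities: (num_entities, d)
        x = torch.stack([e_s, r], dim=1)             # (batch, 2, d), no reshaping
        x = torch.relu(self.conv(x))                 # each filter spans both rows
        x = self.fc(x.flatten(1))                    # project back to (batch, d)
        return torch.sigmoid(x @ all_entities.t())   # score all candidate objects
```

Because the two rows stay aligned along the embedding dimension, each filter sees corresponding entries of the subject and relation embeddings together, which is what preserves the TransE-style translational behavior.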

Implications and Future Directions

From a practical standpoint, the SACN model enhances the ability to predict missing information and complete knowledge bases efficiently. Theoretically, it presents a novel framework for integrating structural graph information with convolutional methods in KG embedding.

Looking forward, several research avenues are noteworthy:

  • Scalability: Extending SACN to scale with larger, more complex knowledge graphs would significantly bolster its utility in real-world applications like web-scale recommendation systems.
  • Neighbor Selection: Strategically selecting which neighbors contribute information during aggregation could further refine the embeddings, particularly in densely connected regions of the graph.

In conclusion, the SACN proposal not only sets a new benchmark for knowledge base completion performance but also opens pathways for more structurally integrated deep learning models in the field of knowledge graph embeddings.