
Convolutional 2D Knowledge Graph Embeddings (1707.01476v6)

Published 5 Jul 2017 in cs.LG

Abstract: Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models -- which potentially limits performance. In this work, we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree -- which are common in highly-connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set -- however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets -- deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across most datasets.

Citations (2,398)

Summary

  • The paper introduces ConvE, a model that uses 2D convolutions over reshaped embeddings to capture complex relational patterns in knowledge graphs.
  • It implements a 1-N scoring procedure that accelerates training by roughly 3x and evaluation by up to 300x.
  • Results demonstrate that ConvE achieves state-of-the-art performance with up to 17x fewer parameters, ensuring robust link prediction on diverse datasets.

Convolutional 2D Knowledge Graph Embeddings

The paper "Convolutional 2D Knowledge Graph Embeddings" by Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel presents ConvE, a novel model for knowledge graph embedding using 2D convolutions. The goal is to improve link prediction tasks on knowledge graphs by leveraging the expressive power and parameter efficiency of convolutional networks. The paper methodically evaluates the proposed model and compares it against other models on several standard datasets, demonstrating its effectiveness and efficiency.

Knowledge graphs represent data as a set of entities connected by relationships, but they often suffer from missing links. Traditional link prediction models, such as DistMult and R-GCN, often rely on shallow embedding techniques, which are limited in their expressiveness and scalability. ConvE addresses these limitations through a multi-layer approach using 2D convolution, which enables it to capture more complex interactions within the data while maintaining parameter efficiency.

Key Contributions

1. Introduction of ConvE:

ConvE uses simple 2D convolutions over embeddings for link prediction. Unlike most neural link predictors that use one-dimensional embeddings, ConvE reshapes these embeddings into two dimensions and applies convolutional filters, followed by a projection back into the embedding space. This structure is advantageous because it increases expressiveness through multiple layers while maintaining parameter efficiency.
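The reshape-convolve-project pipeline can be sketched as follows. This is a minimal illustration with toy dimensions and a naive valid-mode 2D cross-correlation standing in for a learned convolution layer; the sizes, the `conv2d_valid` helper, and the random parameters are invented for the example and are not the authors' hyperparameters or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: embedding dimension k = 8, reshaped to 2 x 4 (illustrative only).
k, rows, cols = 8, 2, 4

e_s = rng.normal(size=k)   # subject entity embedding
r_r = rng.normal(size=k)   # relation embedding
e_o = rng.normal(size=k)   # candidate object embedding

def conv2d_valid(x, kernel):
    """Naive valid-mode 2D cross-correlation, standing in for a conv layer."""
    kh, kw = kernel.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

# 1. Reshape both 1D embeddings to 2D and stack them into a (2*rows) x cols "image".
stacked = np.concatenate([e_s.reshape(rows, cols), r_r.reshape(rows, cols)], axis=0)

# 2. Convolve with a single 2x2 filter and apply a ReLU nonlinearity.
kernel = rng.normal(size=(2, 2))
feat = np.maximum(conv2d_valid(stacked, kernel), 0.0)   # shape (3, 3)

# 3. Flatten the feature map and project back into the k-dimensional embedding space.
W = rng.normal(size=(feat.size, k))
hidden = np.maximum(feat.reshape(-1) @ W, 0.0)

# 4. Score the candidate object: dot product with its embedding, then a sigmoid.
score = 1.0 / (1.0 + np.exp(-(hidden @ e_o)))
```

Stacking the subject and relation reshapes side by side is what lets a single 2D filter mix features from both embeddings at once, which is the source of the extra interactions that 1D models cannot capture.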

2. 1-N Scoring Procedure:

The paper introduces a 1-N scoring procedure that significantly accelerates both training and evaluation. By scoring a (subject, relation) pair against all candidate objects simultaneously rather than one at a time, the authors achieve a roughly three-fold speed-up in training and up to a 300x speed-up in evaluation.
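The core of the speed-up is that the expensive convolutional part of the network runs once per (subject, relation) pair instead of once per triple; the final scores against all objects collapse into a single matrix-vector product. A small sketch with random vectors (the names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_entities, k = 1000, 64

E = rng.normal(size=(n_entities, k))  # embedding matrix for all entities
hidden = rng.normal(size=k)           # projected (subject, relation) representation

# 1-1 scoring: one dot product per candidate object, i.e. one forward pass per triple.
scores_1_1 = np.array([hidden @ E[i] for i in range(n_entities)])

# 1-N scoring: one matrix-vector product scores every candidate object at once.
scores_1_n = E @ hidden

# Both orderings produce identical scores; only the cost differs.
assert np.allclose(scores_1_1, scores_1_n)
```

Because the matrix product is a single highly optimized BLAS call, the per-object loop disappears entirely, which is where the large evaluation speed-up comes from.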

3. Parameter Efficiency:

ConvE achieves the same or better performance than traditional models like DistMult and R-GCN while using 8x and 17x fewer parameters respectively. This is particularly beneficial for large-scale knowledge graphs where memory and computational resources are limited.

4. Robust Data Evaluation:

The authors identify and mitigate test set leakage in existing datasets like WN18 and FB15k, where inverse relations from the training set appear in the test set. They introduce WN18RR, a leakage-free variant of WN18, and validate other commonly used datasets such as FB15k-237, ensuring reliable evaluation of link prediction models.
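To make the leakage concrete, here is a toy version of the kind of rule-based inverse model the paper uses to expose it. The triples and the `predict` helper are invented for illustration, and the rule extraction is simplified (the paper's version tolerates partial overlap between a relation and its inverse):

```python
# Toy training set containing an inverse relation pair.
train = {
    ("paris", "capital_of", "france"),
    ("france", "has_capital", "paris"),
    ("berlin", "capital_of", "germany"),
    ("germany", "has_capital", "berlin"),
}

# Detect relation pairs (r1, r2) where every (s, r1, o) implies (o, r2, s).
relations = {r for _, r, _ in train}
inverses = {}
for r1 in relations:
    for r2 in relations:
        pairs = [(s, o) for s, r, o in train if r == r1]
        if pairs and all((o, r2, s) in train for s, o in pairs):
            inverses[r1] = r2

def predict(s, r):
    """Answer (s, r, ?) purely by looking up the inverse triple in training data."""
    inv = inverses.get(r)
    if inv is None:
        return None
    for s2, r2, o2 in train:
        if r2 == inv and o2 == s:
            return s2
    return None
```

If a test triple like ("paris", "capital_of", "france") has its inverse ("france", "has_capital", "paris") in the training set, `predict("paris", "capital_of")` returns the correct answer without any learning at all, which is exactly why such a trivial model can reach state-of-the-art scores on WN18 and FB15k.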

Experimental Results

Evaluated across commonly used link prediction datasets, ConvE achieves state-of-the-art Mean Reciprocal Rank (MRR) scores:

  • WN18 and FB15k: ConvE outperforms existing models, but the paper also reveals that simple rule-based models exploiting inverse relation leakage can achieve unrealistically high scores on these datasets.
  • Robust Datasets (WN18RR and FB15k-237): ConvE obtains state-of-the-art results in more realistic settings, confirming the model's robustness and practical applicability.
  • YAGO3-10: ConvE demonstrates significant performance improvements on this dataset, underscoring its capability to handle complex, highly connected graphs.
  • Countries Dataset: ConvE excels on tasks designed to evaluate the modeling of long-range dependencies between entities, further validating its expressiveness.

Implications and Future Directions

The introduction of ConvE has several practical and theoretical implications:

Practical Implications:

ConvE's efficiency in terms of parameters and computational speed makes it highly suitable for real-world applications involving large-scale knowledge graphs. Its robustness against test set leakage ensures reliable performance evaluation, paving the way for its adoption in industrial settings where data integrity is paramount.

Theoretical Implications:

ConvE demonstrates that deeper architectures with convolutional layers can significantly enhance the modeling of knowledge graphs compared to shallow models. This challenges the prevailing assumption that scalability in link prediction necessarily requires simplicity in model structure.

Future Developments:

One promising direction is extending ConvE's architecture to incorporate deeper convolutional layers, akin to advancements in computer vision, to further capture intricate interactions within knowledge graphs. Additionally, exploring higher-dimensional convolutions could unlock even greater expressiveness and modeling capabilities. Continued investigation into robust evaluation methodologies will also be critical to advancing this research field.

Conclusion

ConvE represents a substantial advancement in knowledge graph embedding by combining the power of 2D convolutions with efficient parameter usage and robust evaluation techniques. The model not only outperforms existing methods on several benchmarks but also introduces significant methodological improvements for the field. As such, ConvE stands as a robust, efficient, and highly expressive model for link prediction tasks in knowledge graphs.