
TuckER: Tensor Factorization for Knowledge Graph Completion (1901.09590v2)

Published 28 Jan 2019 in cs.LG and stat.ML

Abstract: Knowledge graphs are structured representations of real world facts. However, they typically contain only a small subset of all possible facts. Link prediction is a task of inferring missing facts based on existing ones. We propose TuckER, a relatively straightforward but powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER outperforms previous state-of-the-art models across standard link prediction datasets, acting as a strong baseline for more elaborate models. We show that TuckER is a fully expressive model, derive sufficient bounds on its embedding dimensionalities and demonstrate that several previously introduced linear models can be viewed as special cases of TuckER.

Citations (658)

Summary

  • The paper introduces TuckER, a linear model based on Tucker decomposition that achieves state-of-the-art link prediction performance.
  • The methodology represents entities and relations with vectors and a core tensor to capture complex interactions.
  • Experimental results on multiple datasets highlight its efficiency and superiority over models with quadratic parameter growth.

TuckER: Tensor Factorization for Knowledge Graph Completion

Overview

This paper presents TuckER, a linear model for link prediction in knowledge graphs. TuckER leverages Tucker decomposition for tensor factorization, utilizing the binary tensor representation of knowledge graph triples. Entities and relations are represented as vectors, with a core tensor encapsulating their interactions. The model aims to infer missing facts effectively, establishing itself as a robust baseline compared to more complex models.
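
For concreteness, the scoring function defined in the paper is

$$\phi(e_s, r, e_o) = \mathcal{W} \times_1 \mathbf{e}_s \times_2 \mathbf{w}_r \times_3 \mathbf{e}_o,$$

where $\mathbf{e}_s, \mathbf{e}_o \in \mathbb{R}^{d_e}$ are the entity embeddings, $\mathbf{w}_r \in \mathbb{R}^{d_r}$ is the relation embedding, $\mathcal{W} \in \mathbb{R}^{d_e \times d_r \times d_e}$ is the shared core tensor, and $\times_n$ denotes the tensor product along mode $n$. A logistic sigmoid applied to the score gives the predicted probability that the triple is true.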

Model and Contributions

Knowledge graphs represent real-world facts as structured triples $(e_s, r, e_o)$, where $e_s$ and $e_o$ are the subject and object entities and $r$ is the relation connecting them. Despite the wealth of information they encode, these graphs are typically far from complete. TuckER addresses this by predicting missing links through linear tensor factorization (a minimal implementation sketch follows the list below). Key contributions include:

  • Proposal of TuckER: A simple yet expressive model that achieves state-of-the-art results in link prediction tasks.
  • Full Expressiveness: Given sufficiently large embedding dimensionalities, for which the paper derives explicit bounds, TuckER can represent any assignment of true and false triples.
  • Subsuming Previous Models: TuckER generalizes previously dominant linear models such as RESCAL, DistMult, ComplEx, and SimplE, offering a unified framework.
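
To make the scoring computation concrete, here is a minimal PyTorch sketch (the function name and the einsum-based mode products are our own choices for illustration; the authors' actual implementation may differ):

```python
import torch

def tucker_score(W, e_s, w_r, e_o):
    """Score a triple (e_s, r, e_o) via TuckER's bilinear product.

    W:   core tensor, shape (d_e, d_r, d_e)
    e_s: subject entity embedding, shape (d_e,)
    w_r: relation embedding, shape (d_r,)
    e_o: object entity embedding, shape (d_e,)
    """
    # Mode-2 product: contract the relation axis of the core tensor.
    W_r = torch.einsum('irj,r->ij', W, w_r)   # shape (d_e, d_e)
    # Mode-1 and mode-3 products: contract with the entity embeddings.
    score = e_s @ W_r @ e_o                   # scalar
    # Logistic sigmoid maps the score to a probability that the triple holds.
    return torch.sigmoid(score)
```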

Theoretical Insights

TuckER is founded on the expressive power of Tucker decomposition. Because the core tensor is shared by all relations, parameters are pooled across relations, which acts as a form of multi-task learning. The model is also fully expressive: with sufficiently large embedding dimensionalities it can capture any configuration of entity-relation interactions, and the derived bounds quantify how large those dimensionalities need to be.
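
As an example of how earlier linear models fall out as special cases, DistMult is recovered by setting $d_e = d_r$ and fixing the core tensor to the superdiagonal identity tensor $\mathcal{I}$ (a correspondence shown in the paper):

$$\mathcal{I}_{ijk} = \begin{cases} 1 & i = j = k \\ 0 & \text{otherwise} \end{cases} \qquad\Longrightarrow\qquad \phi(e_s, r, e_o) = \sum_{i=1}^{d_e} (\mathbf{e}_s)_i \, (\mathbf{w}_r)_i \, (\mathbf{e}_o)_i.$$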

Numerical Results and Implications

Experiments across the standard datasets WN18, WN18RR, FB15k, and FB15k-237 show that TuckER matches or outperforms prior state-of-the-art models, with particularly strong results on datasets containing many relations. Because its parameter count grows linearly with the number of entities and relations, it also scales more favorably than models whose per-relation parameters grow quadratically with embedding size (a rough comparison follows below).
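
As a back-of-the-envelope illustration of this scaling (the entity and relation counts below are FB15k-237's; the embedding sizes are illustrative choices, not necessarily the paper's tuned hyperparameters):

```python
# Illustrative parameter counts for TuckER vs. a per-relation-matrix model.
n_e, n_r = 14_541, 237   # entities / relations in FB15k-237
d_e, d_r = 200, 30       # illustrative embedding dimensionalities

# TuckER: entity and relation embeddings plus one shared core tensor,
# so the count grows linearly in n_e and n_r.
tucker_params = n_e * d_e + n_r * d_r + d_e * d_e * d_r

# RESCAL: a full d_e x d_e matrix per relation (quadratic in d_e).
rescal_params = n_e * d_e + n_r * d_e * d_e

print(f"TuckER: {tucker_params:,} parameters")  # ~4.1 million
print(f"RESCAL: {rescal_params:,} parameters")  # ~12.4 million
```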

TuckER's results highlight an essential advantage of combining simplicity with expressiveness in linear models, challenging the necessity of complex architectures in link prediction. The ability to capture intricate relational dynamics points to potential applications in domains where interpretability and computational efficiency are critical.

Future Directions

Potential extensions of TuckER include incorporating domain-specific knowledge and constraints, which would broaden its adaptability and range of application. Continued work on reducing computational demands without sacrificing expressiveness also remains a promising avenue. A better understanding of how the interactions encoded in the core tensor contribute to overall performance could yield insights for next-generation knowledge graph models.

Conclusion

In conclusion, TuckER provides a compelling alternative to existing knowledge graph completion models. By combining simplicity with expressiveness through Tucker decomposition, it maintains competitive performance while resting on well-developed theoretical foundations. This balance suggests pathways for further exploration and refinement in the automated inference of factual knowledge.