Representability of the sign-decomposition factorization by existing KGE scoring functions
Determine whether standard knowledge graph embedding models that score triples via dot products of the form φ(s,r,o) = h_{s,r}^T e_o can represent the sign-decomposition factorization Y = s(H E^T) described in Theorem 4.1. Here Y ∈ {0,1}^{|E||R| × |E|} is the adjacency matrix of the knowledge graph, with rows indexed by subject–relation pairs and columns by object entities; s(x) = 1 if x > 0 and 0 otherwise is applied element-wise; H ∈ R^{|E||R| × (2c+1)} and E ∈ R^{|E| × (2c+1)} are the real-valued factor matrices; and c is the maximum out-degree across subject–relation pairs.
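
To make the factorization concrete, the sketch below constructs matrices H and E of rank 2c+1 satisfying s(H E^T) = Y via a polynomial/moment-curve argument. This is offered only as an illustration of one way such a decomposition can be realized, not necessarily the construction used in Theorem 4.1; the function name sign_decomposition and the toy test matrix are hypothetical.

```python
import numpy as np

def sign_decomposition(Y):
    """Build H and E with s(H @ E.T) == Y using rank 2c + 1, where c is the
    maximum number of ones in any row of the binary matrix Y.

    Columns are embedded on the moment curve (1, t, t^2, ..., t^{2c}) at t = j,
    and each row stores the coefficients of a degree-2c polynomial that is
    positive exactly at the integer points j where Y[i, j] == 1.
    """
    n_rows, n_cols = Y.shape
    c = int(Y.sum(axis=1).max())               # maximum out-degree c
    rank = 2 * c + 1

    # Object-side embeddings: moment curve, E[j] = (1, j, j^2, ..., j^{2c}).
    t = np.arange(n_cols, dtype=float)
    E = np.vander(t, N=rank, increasing=True)

    # Subject/relation-side embeddings: coefficients (in increasing powers) of
    # p_i(t) = -prod_{j in S_i} ((t - j)^2 - 1/4), where S_i = {j : Y[i, j] = 1}.
    # At an integer t, exactly one factor is negative iff t is in S_i, so
    # p_i(j) > 0 exactly when Y[i, j] == 1.
    H = np.zeros((n_rows, rank))
    for i in range(n_rows):
        coeffs = np.array([1.0])               # empty product
        for j in np.flatnonzero(Y[i]):
            factor = np.array([j * j - 0.25, -2.0 * j, 1.0])   # (t - j)^2 - 1/4
            coeffs = np.convolve(coeffs, factor)
        H[i, :coeffs.size] = -coeffs           # flip sign so ones become positive scores
    return H, E

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = (rng.random((6, 8)) < 0.3).astype(int)     # toy binary adjacency slice
    H, E = sign_decomposition(Y)
    assert np.array_equal((H @ E.T > 0).astype(int), Y)
    print("factorization rank:", H.shape[1])
```

In such a factorization the rows of H play the role of the query vectors h_{s,r} and the rows of E play the role of the object embeddings e_o, so the representability question asks whether a given KGE scoring function can realize row vectors of this form within its parametric family.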
References
Note that two practical questions remain. First, can existing KGEs, with their specific scoring functions, actually represent the factorisation technique described in Theorem 4.1? Second, even if they can, can this solution be efficiently learned through gradient-based optimisation in practice? We leave these for future work.