Learnability of the sign-decomposition solution via gradient-based optimization
Determine whether gradient-based optimization, under practical training regimes, can efficiently find parameter settings of knowledge graph embedding (KGE) models that realize the exact sign decomposition Y = s(H E^T), where s is applied element-wise, H ∈ R^{|E||R| × (2c+1)}, E ∈ R^{|E| × (2c+1)}, and c is the maximum out-degree. Such a solution yields perfect sign and ranking reconstruction for (s, r, ?) queries.
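A minimal empirical probe of this question (my own sketch, not an experiment from the source): plant ground-truth factors of rank 2c + 1, take Y = s(H* E*^T), and check whether plain gradient descent on a logistic surrogate of the sign-agreement objective recovers a factorization with matching signs. All dimensions and hyperparameters below are illustrative choices.

```python
import numpy as np

# Toy learnability probe: plant factors H*, E* of rank k = 2c + 1,
# set Y = sign(H* E*^T), then run gradient descent on learnable H, E.
rng = np.random.default_rng(0)
n, m, c = 40, 30, 3
k = 2 * c + 1                      # rank from the decomposition above

H_true = rng.standard_normal((n, k))
E_true = rng.standard_normal((m, k))
Y = np.sign(H_true @ E_true.T)     # target signs in {-1, +1}

H = 0.1 * rng.standard_normal((n, k))   # learnable factors, small init
E = 0.1 * rng.standard_normal((m, k))

lr, steps = 0.02, 5000
for _ in range(steps):
    Z = H @ E.T
    # Logistic surrogate: L = sum log(1 + exp(-Y * Z)); its gradient
    # w.r.t. Z is -Y * sigmoid(-Y * Z).  Clipping avoids exp overflow.
    G = -Y / (1.0 + np.exp(np.clip(Y * Z, -30.0, 30.0)))
    gH, gE = G @ E, G.T @ H        # chain rule through Z = H E^T
    H -= lr * gH
    E -= lr * gE

accuracy = float(np.mean(np.sign(H @ E.T) == Y))
print(f"fraction of signs recovered: {accuracy:.3f}")
```

The logistic surrogate stands in for the non-differentiable sign reconstruction objective; whether such surrogates suffice at realistic KG scale is precisely the open question.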
References
Note that two practical questions remain. First, can existing KGEs, with their specific scoring functions, actually represent the factorisation technique described in Theorem \ref{thm:exact_sign_decomposition_kge}? Second, even if they can, can this solution be efficiently learned through gradient-based optimisation in practice? We leave these for future work.
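On the first question, a small sanity check (my own example, using DistMult as an illustrative scoring function): DistMult's score f(s, r, o) = ⟨e_s ⊙ w_r, e_o⟩ can be written in the matrix form H E^T above, with row (s, r) of H equal to the Hadamard product e_s ⊙ w_r. This shows the *form* is expressible; whether the particular H required by the theorem is reachable under this Hadamard-product constraint is exactly the open representability question.

```python
import numpy as np

# DistMult scores rewritten as H E^T, where H has one row per
# (subject, relation) pair: H[(s, r)] = e_s * w_r (element-wise).
rng = np.random.default_rng(1)
n_ent, n_rel, d = 5, 4, 2 * 3 + 1      # d = 2c + 1 with c = 3

Ent = rng.standard_normal((n_ent, d))  # entity embeddings (rows of E)
Rel = rng.standard_normal((n_rel, d))  # relation embeddings

H = np.stack([Ent[s] * Rel[r] for s in range(n_ent) for r in range(n_rel)])
scores = H @ Ent.T                     # shape (n_ent * n_rel, n_ent)

# Cross-check one entry against the triple-wise DistMult score.
s, r, o = 2, 1, 4
assert np.isclose(scores[s * n_rel + r, o], np.sum(Ent[s] * Rel[r] * Ent[o]))
print("DistMult scores match the H E^T form")
```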