Learnability of the sign-decomposition solution via gradient-based optimization

Determine whether gradient-based optimization, under practical training regimes, can efficiently learn parameter settings of knowledge graph embedding (KGE) models that achieve the exact sign decomposition Y = s(H E^T), where s is the element-wise sign function, H ∈ R^{|E||R| × (2c+1)}, E ∈ R^{|E| × (2c+1)}, and c is the maximum out-degree, thereby realizing perfect sign and ranking reconstruction for (s, r, ?) queries.
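
To make the objects concrete, the following is a minimal NumPy sketch of the setup: a toy sign matrix Y with one row per (s, r, ?) query, the width 2c + 1 derived from the maximum out-degree, and checks for the two reconstruction criteria. The toy sizes, the random Y, and the helper names (exact_sign_reconstruction, ranking_reconstruction) are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Toy sizes standing in for |E| entities and |R| relations (illustrative only).
num_entities, num_relations = 6, 2
rng = np.random.default_rng(0)

# Y: one row per (s, r, ?) query, one column per candidate tail;
# +1 marks a true triple, -1 a false one.
Y = rng.choice([-1.0, 1.0], size=(num_entities * num_relations, num_entities),
               p=[0.8, 0.2])

# c: the maximum number of true tails over all queries (the maximum
# out-degree), giving the theorem's embedding width 2c + 1.
c = int((Y == 1.0).sum(axis=1).max())
width = 2 * c + 1

def exact_sign_reconstruction(Y, H, E):
    """Element-wise sign condition Y = s(H E^T)."""
    return np.array_equal(np.sign(H @ E.T), Y)

def ranking_reconstruction(Y, H, E):
    """Per query, every true tail must outscore every false tail."""
    S = H @ E.T
    for y, scores in zip(Y, S):
        pos, neg = scores[y == 1.0], scores[y == -1.0]
        if pos.size and neg.size and pos.min() <= neg.max():
            return False
    return True
```

Note that exact sign reconstruction implies exact ranking reconstruction: if every true tail scores positive and every false tail negative, true tails necessarily outrank false ones.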

Background

After establishing a theoretical construction that achieves exact sign reconstruction with embedding dimension 2c+1, the paper asks whether such solutions are attainable in practice: even when the decomposition is representable within a model class, efficient learning through standard gradient-based methods is not guaranteed.

The authors therefore single out the learnability of this factorization, under the realistic optimization procedures and losses used in KGE training, as a distinct open question that is crucial for translating the theoretical bound into practical performance.
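
As one concrete instantiation of this question, the sketch below (continuing the toy setup above) runs plain full-batch gradient descent on a logistic surrogate of the sign condition at width 2c + 1, then tests whether exact reconstruction was reached. The softplus loss, learning rate, and step count are assumptions chosen for illustration; the paper does not prescribe a training protocol, and a toy success or failure here is only anecdotal.

```python
def fit_sign_factorization(Y, width, lr=0.5, steps=5000, seed=0):
    """Gradient descent on mean softplus(-Y * (H E^T)), a smooth
    surrogate for the sign condition; returns the learned H, E."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = Y.shape
    H = 0.1 * rng.standard_normal((n_rows, width))
    E = 0.1 * rng.standard_normal((n_cols, width))
    for _ in range(steps):
        S = H @ E.T
        # Gradient of the mean loss w.r.t. S: -Y * sigmoid(-Y * S) / N
        # (clipped inside exp to avoid overflow).
        G = -Y / (1.0 + np.exp(np.clip(Y * S, -50.0, 50.0))) / Y.size
        gH, gE = G @ E, G.T @ H  # chain rule through S = H E^T
        H -= lr * gH
        E -= lr * gE
    return H, E

H, E = fit_sign_factorization(Y, width)
print("exact signs:  ", exact_sign_reconstruction(Y, H, E))
print("exact ranking:", ranking_reconstruction(Y, H, E))
```

Softplus is used here as a standard smooth relaxation of the 0-1 sign error; whether such runs reach exact reconstruction reliably, and at realistic KG scale, is precisely what the open question asks.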

References

Note that two practical questions remain. First, can existing KGEs, with their specific scoring functions, actually represent the factorisation technique described in Theorem \ref{thm:exact_sign_decomposition_kge}? Second, even if they can, can this solution be efficiently learned through gradient-based optimisation in practice? We leave these for future work.

Breaking Rank Bottlenecks in Knowledge Graph Embeddings (arXiv:2506.22271, Badreddine et al., 27 Jun 2025), Section 4, “A sufficient bound for sign and ranking reconstruction,” paragraph beginning “Note that two practical questions remain.”