ConvD: Attention Enhanced Dynamic Convolutional Embeddings for Knowledge Graph Completion (2312.07589v2)
Abstract: Knowledge graphs often suffer from incompleteness, which can be alleviated by completing the missing information. However, current state-of-the-art deep convolutional knowledge embedding models rely on external convolution kernels and the conventional convolution process, which limits their feature-interaction capability. This paper introduces ConvD, a novel dynamic convolutional embedding model that directly reshapes relation embeddings into multiple internal convolution kernels, effectively enhancing the feature interactions between relation embeddings and entity embeddings. In addition, we incorporate an attention mechanism optimized with prior knowledge that assigns different contribution weights to the multiple relation convolution kernels in the dynamic convolution, further boosting the expressive power of the model. Extensive experiments on several datasets show that the proposed model consistently outperforms state-of-the-art baselines, with average improvements of 3.28% to 14.69% across all evaluation metrics, while using 50.66% to 85.40% fewer parameters than other state-of-the-art models.
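To make the abstract's core idea concrete, the following is a minimal sketch (not the authors' code) of a dynamic convolutional scorer in PyTorch: relation embeddings are reshaped into several internal convolution kernels, an attention layer assigns a contribution weight to each kernel, and the resulting relation-specific kernel is convolved with the reshaped head-entity embedding. All dimensions, layer choices, and names (e.g. `num_kernels`, `kernel_size`, `DynamicConvScorer`) are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch of dynamic convolution for KG embedding scoring.
# Assumptions: embedding sizes, the use of a single attention-weighted kernel,
# and the ReLU/projection layers are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicConvScorer(nn.Module):
    def __init__(self, num_entities, num_relations,
                 emb_h=10, emb_w=20, num_kernels=4, kernel_size=3):
        super().__init__()
        self.emb_h, self.emb_w = emb_h, emb_w
        self.num_kernels, self.k = num_kernels, kernel_size
        emb_dim = emb_h * emb_w

        self.ent = nn.Embedding(num_entities, emb_dim)
        # Relation embedding sized so it can be reshaped into
        # num_kernels separate (k x k) internal convolution kernels.
        self.rel = nn.Embedding(num_relations, num_kernels * kernel_size ** 2)
        # Attention that assigns a contribution weight to each relation kernel.
        self.attn = nn.Linear(num_kernels * kernel_size ** 2, num_kernels)
        # 'same' padding keeps the spatial size, so the conv output is H*W.
        self.proj = nn.Linear(emb_h * emb_w, emb_dim)

    def forward(self, head_idx, rel_idx):
        b = head_idx.size(0)
        # Head entity embedding reshaped into a 2D "image".
        e = self.ent(head_idx).view(b, 1, self.emb_h, self.emb_w)
        r = self.rel(rel_idx)                                    # (B, K*k*k)
        kernels = r.view(b, self.num_kernels, 1, self.k, self.k)

        # Attention weights over the K relation kernels; the weighted sum
        # yields one relation-specific dynamic kernel per triple.
        w = F.softmax(self.attn(r), dim=-1)                      # (B, K)
        dyn_kernel = (w[:, :, None, None, None] * kernels).sum(1)  # (B, 1, k, k)

        # Grouped-convolution trick: fold the batch into channels so each
        # example is convolved with its own relation-derived kernel.
        e = e.view(1, b, self.emb_h, self.emb_w)
        x = F.conv2d(e, dyn_kernel, padding=self.k // 2, groups=b)
        x = F.relu(x).view(b, -1)

        # Project back to entity space and score against all candidate tails.
        x = self.proj(x)
        return x @ self.ent.weight.t()                           # (B, num_entities)


if __name__ == "__main__":
    model = DynamicConvScorer(num_entities=100, num_relations=20)
    scores = model(torch.tensor([0, 1]), torch.tensor([3, 5]))
    print(scores.shape)  # torch.Size([2, 100])
```

Because the kernels are produced from the relation embedding itself rather than learned as external convolution filters, every triple is scored with a convolution conditioned on its relation, which is the feature-interaction property the abstract emphasizes.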