Towards Continual Knowledge Graph Embedding via Incremental Distillation (2405.04453v1)
Abstract: Traditional knowledge graph embedding (KGE) methods typically require retraining on the entire knowledge graph (KG) at significant cost whenever new knowledge emerges. To address this issue, the continual knowledge graph embedding (CKGE) task has been proposed: train the KGE model to learn emerging knowledge efficiently while preserving old knowledge well. However, the explicit graph structure of KGs, which is critical for both goals, has been largely ignored by existing CKGE methods. On the one hand, existing methods usually learn new triples in random order, destroying the internal structure of the new KG. On the other hand, old triples are preserved with equal priority, which fails to alleviate catastrophic forgetting effectively. In this paper, we propose a competitive CKGE method based on incremental distillation (IncDE), which makes full use of the explicit graph structure of KGs. First, to optimize the learning order, we introduce a hierarchical strategy that ranks new triples for layer-by-layer learning: using the inter- and intra-hierarchical orders together, new triples are grouped into layers according to graph structure features. Second, to preserve old knowledge effectively, we devise a novel incremental distillation mechanism that seamlessly transfers entity representations from one layer to the next. Finally, we adopt a two-stage training paradigm to keep under-trained new knowledge from over-corrupting old knowledge. Experimental results demonstrate the superiority of IncDE over state-of-the-art baselines. Notably, the incremental distillation mechanism alone contributes improvements of 0.2%-6.5% in the mean reciprocal rank (MRR) score.
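To make the hierarchical learning order concrete, here is a minimal sketch, assuming new triples arrive as (head, relation, tail) tuples and that the inter-layer order is a breadth-first expansion from entities already present in the old KG. The function name `layer_new_triples` and this particular grouping rule are illustrative assumptions, not the paper's exact procedure.

```python
def layer_new_triples(new_triples, old_entities):
    """Group new (head, relation, tail) triples into layers by
    breadth-first distance from entities already in the old KG.
    Hypothetical sketch of the inter-hierarchical ordering idea."""
    remaining = set(new_triples)
    known = set(old_entities)
    layers = []
    while remaining:
        # Triples touching an already-known entity form the next layer.
        layer = {t for t in remaining if t[0] in known or t[2] in known}
        if not layer:
            # Remainder disconnected from the old KG: one final layer.
            layers.append(sorted(remaining))
            break
        layers.append(sorted(layer))
        remaining -= layer
        # Entities introduced by this layer become known for the next.
        for head, _, tail in layer:
            known.update((head, tail))
    return layers

# Example: with old entity "a", the chain a -> b -> c -> d splits
# into three layers, learned in order.
layers = layer_new_triples(
    [("a", "r1", "b"), ("b", "r2", "c"), ("c", "r3", "d")],
    old_entities={"a"},
)
assert layers == [[("a", "r1", "b")], [("b", "r2", "c")], [("c", "r3", "d")]]
```

Under this rule, triples anchored on known entities are learned first, so each layer extends representations that earlier layers have already stabilized.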
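The incremental distillation mechanism can likewise be sketched as a drift penalty: after one layer is trained, the entity table is snapshotted, and while the next layer is trained, previously seen entities are pulled back toward that snapshot. The L2 form below is an assumed simplification; the paper's actual distillation objective may differ.

```python
import torch

def incremental_distill_loss(curr_emb: torch.Tensor,
                             prev_emb: torch.Tensor,
                             seen_ids: torch.Tensor) -> torch.Tensor:
    # Pull embeddings of already-seen entities back toward their
    # snapshot from the previous layer; the snapshot is detached so
    # gradients flow only into the current embedding table.
    diff = curr_emb[seen_ids] - prev_emb[seen_ids].detach()
    return diff.pow(2).sum(dim=-1).mean()

# Hypothetical usage while training layer i+1 (names assumed):
# loss = kge_loss(batch) + lam * incremental_distill_loss(
#     model.entity_emb.weight, prev_snapshot, seen_entity_ids)
```

Weighting this term against the link-prediction loss trades plasticity for stability, which is the balance the two-stage training paradigm described in the abstract is meant to manage.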