Editing Language Model-based Knowledge Graph Embeddings (2301.10405v8)
Abstract: Recent decades have witnessed the empirical success of framing Knowledge Graph (KG) embeddings via LLMs. However, LLM-based KG embeddings are usually deployed as static artifacts, making them difficult to modify after deployment without re-training. To address this issue, we propose a new task in this paper: editing LLM-based KG embeddings. This task is designed to facilitate rapid, data-efficient updates to KG embeddings without compromising performance elsewhere. We build four new datasets: E-FB15k237, A-FB15k237, E-WN18RR, and A-WN18RR, and evaluate several knowledge-editing baselines, demonstrating the limited ability of previous models to handle this challenging task. We further propose a simple yet strong baseline, dubbed KGEditor, which uses additional parametric layers of a hypernetwork to edit or add facts. Our comprehensive experimental results reveal that KGEditor excels at updating specific facts without impacting overall performance, even with limited training resources. Code and datasets are available at https://github.com/zjunlp/PromptKG/tree/main/deltaKG.
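To make the hypernetwork idea concrete, below is a minimal sketch of the general pattern of hypernetwork-based embedding editing: a small trainable network maps the embedding of the fact to be edited to a low-rank weight delta for an extra layer inserted into a frozen scorer. All names here (`EditHyperNetwork`, `EditableScorer`, the pooled `fact_emb` input, the rank of 8) are illustrative assumptions, not the authors' actual KGEditor implementation.

```python
# Illustrative sketch only: hypernetwork-generated edits on top of a frozen
# KG-embedding scorer. Not the authors' code; names and shapes are assumed.
import torch
import torch.nn as nn

class EditHyperNetwork(nn.Module):
    """Maps the embedding of a fact to be edited to a low-rank weight delta
    for one additional linear layer placed on top of the frozen scorer."""

    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.to_u = nn.Linear(dim, dim * rank)  # left factor of the delta
        self.to_v = nn.Linear(dim, dim * rank)  # right factor of the delta
        self.dim = dim
        self.rank = rank

    def forward(self, fact_emb: torch.Tensor) -> torch.Tensor:
        # fact_emb: (dim,) pooled embedding of (head, relation, new tail)
        u = self.to_u(fact_emb).view(self.dim, self.rank)
        v = self.to_v(fact_emb).view(self.rank, self.dim)
        return u @ v  # (dim, dim) low-rank weight correction

class EditableScorer(nn.Module):
    """Frozen base projection plus a hypernetwork-generated correction."""

    def __init__(self, dim: int):
        super().__init__()
        self.base = nn.Linear(dim, dim)    # stands in for the frozen LM layers
        self.base.requires_grad_(False)    # base model is never updated
        self.hyper = EditHyperNetwork(dim) # only the hypernetwork trains

    def forward(self, hr_emb: torch.Tensor, fact_emb: torch.Tensor) -> torch.Tensor:
        delta = self.hyper(fact_emb)
        # Edited representation = frozen projection + generated correction.
        return self.base(hr_emb) + hr_emb @ delta.T

dim = 32
scorer = EditableScorer(dim)
hr = torch.randn(dim)    # head + relation query embedding
fact = torch.randn(dim)  # embedding of the fact being edited
print(scorer(hr, fact).shape)  # torch.Size([32])
```

The design intent this sketch tries to capture is the one stated in the abstract: because only the small hypernetwork is trained while the base model stays frozen, each edit is rapid and data-efficient, and representations of facts unrelated to the edit pass through largely unchanged.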