Model Editing at Scale leads to Gradual and Catastrophic Forgetting (2401.07453v4)

Published 15 Jan 2024 in cs.CL, cs.AI, and cs.IR

Abstract: Editing knowledge in LLMs is an attractive capability: it allows us to correct facts learnt incorrectly during pre-training, as well as to update the model with an ever-growing list of new facts. While existing model editing techniques have shown promise, they are usually evaluated using metrics for reliability, specificity and generalization over one or a few edits. We argue that for model editing to have practical utility, we must be able to make multiple edits to the same model. With this in mind, we evaluate current model editing methods at scale, focusing on two state-of-the-art methods: ROME and MEMIT. We find that as the model is edited sequentially with multiple facts, it continually forgets previously edited facts as well as the ability to perform downstream tasks. This forgetting happens in two phases: an initial gradual but progressive forgetting phase, followed by an abrupt or catastrophic forgetting phase. Both gradual and catastrophic forgetting limit the usefulness of model editing methods at scale: the former makes model editing less effective as multiple edits are made to the model, while the latter caps the scalability of such model editing methods. Our analysis also highlights other key limitations of ROME and MEMIT at scale. With our work, we push for the development and evaluation of model editing methods with scalability in mind.
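
The evaluation protocol described in the abstract (sequentially writing facts into the model with an editor such as ROME or MEMIT, then checking how many earlier edits survive and whether downstream ability degrades) can be summarized with the minimal Python sketch below. The helpers apply_edit and recalls_fact are hypothetical placeholders standing in for an editing method and a fact-recall probe; they are not part of the paper's released code.

    # Minimal sketch of sequential-editing evaluation (hypothetical helpers).
    def sequential_edit_eval(model, facts, apply_edit, recalls_fact):
        """Edit `facts` into `model` one at a time; after each edit, record
        the fraction of previously edited facts the model still recalls."""
        retention_curve = []
        for t, fact in enumerate(facts, start=1):
            model = apply_edit(model, fact)        # one knowledge edit (e.g. ROME/MEMIT-style)
            retained = sum(recalls_fact(model, f) for f in facts[:t])
            retention_curve.append(retained / t)   # 1.0 = all earlier edits still hold
        return retention_curve

Under such a protocol, gradual forgetting would show up as a slow decline of the retention fraction as edits accumulate, while catastrophic forgetting would appear as an abrupt collapse, typically accompanied by a drop in downstream-task accuracy tracked alongside this curve.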

References (20)
  1. Quantifying memorization across neural language models. arXiv preprint arXiv:2202.07646.
  2. A brief review of hypernetworks in deep learning. arXiv preprint arXiv:2306.06955.
  3. Evaluating the ripple effects of knowledge editing in language models. arXiv preprint arXiv:2307.12976.
  4. Editing factual knowledge in language models. arXiv preprint arXiv:2104.08164.
  5. Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Third International Workshop on Paraphrasing (IWP2005).
  6. A survey for in-context learning. arXiv preprint arXiv:2301.00234.
  7. Transformer feed-forward layers are key-value memories. arXiv preprint arXiv:2012.14913.
  8. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211.
  9. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526.
  10. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474.
  11. Unveiling the pitfalls of knowledge editing for large language models. arXiv preprint arXiv:2310.02129.
  12. Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 35:17359–17372.
  13. Mass-editing memory in a transformer. arXiv preprint arXiv:2210.07229.
  14. Fast model editing at scale. arXiv preprint arXiv:2110.11309.
  15. Memory-based model editing at scale. In International Conference on Machine Learning, pages 15817–15831. PMLR.
  16. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
  17. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642.
  18. Attention is all you need. Advances in Neural Information Processing Systems, 30.
  19. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
  20. Editing large language models: Problems, methods, and opportunities. arXiv preprint arXiv:2305.13172.
Authors (3)
  1. Akshat Gupta (41 papers)
  2. Anurag Rao (8 papers)
  3. Gopala Anumanchipalli (30 papers)
Citations (33)