Navigating the Dual Facets: A Comprehensive Evaluation of Sequential Memory Editing in Large Language Models (2402.11122v1)

Published 16 Feb 2024 in cs.CL and cs.AI

Abstract: Memory Editing (ME) has emerged as an efficient method to modify erroneous facts or inject new facts into LLMs. Two mainstream ME methods exist: parameter-modifying ME and parameter-preserving ME (integrating extra modules while preserving the original parameters). Regrettably, previous studies on ME evaluation have two critical limitations: (i) evaluating LLMs with a single edit only, neglecting the need for continuous editing, and (ii) evaluations focusing solely on basic factual triples, overlooking broader LLM capabilities like logical reasoning and reading comprehension. This study addresses these limitations with three contributions: (i) We explore how ME affects a wide range of fundamental capabilities of LLMs under sequential editing. Experimental results reveal an intriguing phenomenon: most parameter-modifying ME methods consistently degrade performance across all tasks after a few sequential edits. In contrast, parameter-preserving ME effectively maintains LLMs' fundamental capabilities but struggles to accurately recall edited knowledge presented in a different format. (ii) We extend our evaluation to different editing settings, such as layers to edit, model size, instruction tuning, etc. Experimental findings indicate several strategies that can potentially mitigate the adverse effects of ME. (iii) We further explain why parameter-modifying ME damages LLMs along three dimensions: parameter changes after editing, language modeling capability, and in-context learning capability. Our in-depth study advocates more careful use of ME in real-world scenarios.
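
For readers unfamiliar with the sequential-editing protocol the abstract refers to, the sketch below shows one way such an evaluation loop might look: edits are applied cumulatively to the same model (never reset between edits), and fundamental-capability benchmarks are re-run at intervals to track drift. The names `apply_memory_edit` and `evaluate_fundamental_tasks`, and the toy stand-ins in the usage example, are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a sequential memory-editing evaluation loop (assumed design,
# not the paper's code). `apply_memory_edit` could wrap a parameter-modifying
# editor (ROME/MEMIT-style) or a parameter-preserving module update;
# `evaluate_fundamental_tasks` would run reasoning / reading-comprehension probes.

from typing import Callable, Dict, List


def sequential_editing_eval(
    model,
    edits: List[Dict[str, str]],
    apply_memory_edit: Callable,
    evaluate_fundamental_tasks: Callable,
    eval_every: int = 10,
) -> List[Dict[str, float]]:
    """Apply edits one after another without resetting the model, and record
    downstream task scores as the number of accumulated edits grows."""
    history: List[Dict[str, float]] = []
    for step, edit in enumerate(edits, start=1):
        # Edit i is applied on top of edits 1..i-1 (the "sequential" setting).
        model = apply_memory_edit(model, edit)
        if step % eval_every == 0:
            scores = evaluate_fundamental_tasks(model)
            history.append({"num_edits": float(step), **scores})
    return history


# Toy usage with synthetic stand-ins, only to show the call pattern:
if __name__ == "__main__":
    toy_model = {}  # placeholder for a real LLM handle
    toy_edits = [
        {"subject": f"entity_{i}", "relation": "is", "object": f"value_{i}"}
        for i in range(30)
    ]
    log = sequential_editing_eval(
        toy_model,
        toy_edits,
        apply_memory_edit=lambda m, e: {**m, e["subject"]: e["object"]},
        evaluate_fundamental_tasks=lambda m: {"placeholder_score": 0.0},
        eval_every=10,
    )
    print(log)
```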

Authors (8)
  1. Zihao Lin (22 papers)
  2. Mohammad Beigi (4 papers)
  3. Hongxuan Li (4 papers)
  4. Yufan Zhou (36 papers)
  5. Yuxiang Zhang (104 papers)
  6. Qifan Wang (129 papers)
  7. Wenpeng Yin (69 papers)
  8. Lifu Huang (91 papers)
Citations (5)