WilKE: Wise-Layer Knowledge Editor for Lifelong Knowledge Editing (2402.10987v2)

Published 16 Feb 2024 in cs.CL and cs.AI

Abstract: Knowledge editing aims to rectify outdated or erroneous knowledge in LLMs without costly retraining. However, current knowledge editing methods primarily focus on single edits and fail to meet the requirements of lifelong editing. This study reveals a performance degradation that knowledge editing encounters in lifelong editing, characterized by toxicity buildup and toxicity flash, and identifies its primary cause as pattern mismatch. We introduce a knowledge editing approach named Wise-Layer Knowledge Editor (WilKE), which selects the editing layer based on how well the editing knowledge matches the patterns stored across different layers of the LLM. Experimental results demonstrate that, in lifelong editing, WilKE achieves average improvements of 46.2% and 67.8% when editing GPT2-XL and GPT-J, respectively, relative to state-of-the-art knowledge editing methods.
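The core selection step can be illustrated with a minimal sketch. Assuming a ROME-style view in which each MLP layer acts as a key-value memory, the snippet below scores every layer by how strongly its stored "key" direction matches the pattern of the knowledge being edited, then picks the best-matching layer as the edit site. The random tensors and the cosine-similarity score are illustrative stand-ins, not the paper's exact matching-degree computation.

```python
# Illustrative sketch of WilKE's wise-layer selection (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
NUM_LAYERS, HIDDEN = 12, 64

# Stand-in for the per-layer "key" directions stored in the model's MLPs.
layer_keys = rng.standard_normal((NUM_LAYERS, HIDDEN))

# Stand-in for the hidden representation (pattern) of the edit's subject.
edit_pattern = rng.standard_normal(HIDDEN)

def pattern_match_degree(key: np.ndarray, pattern: np.ndarray) -> float:
    """Illustrative matching score: cosine similarity between a layer's
    key direction and the pattern of the knowledge being edited."""
    return float(key @ pattern / (np.linalg.norm(key) * np.linalg.norm(pattern)))

# WilKE's core idea: rather than editing one fixed layer for every fact,
# choose the layer whose stored pattern best matches this specific edit.
degrees = [pattern_match_degree(k, edit_pattern) for k in layer_keys]
wise_layer = int(np.argmax(degrees))
print(f"edit layer {wise_layer} (match degree {degrees[wise_layer]:.3f})")
```

In the paper's framing, routing each edit to its best-matching layer is what avoids the pattern mismatch that drives toxicity buildup and toxicity flash when a single fixed layer absorbs every edit over a lifelong editing sequence.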

Authors (5)
  1. Chenhui Hu (9 papers)
  2. Pengfei Cao (39 papers)
  3. Yubo Chen (58 papers)
  4. Kang Liu (207 papers)
  5. Jun Zhao (469 papers)
Citations (19)