Editing the Mind of Giants: An In-Depth Exploration of Pitfalls of Knowledge Editing in Large Language Models (2406.01436v2)

Published 3 Jun 2024 in cs.CL

Abstract: Knowledge editing is an emerging technique for efficiently updating factual knowledge in LLMs with minimal alteration of parameters. However, recent studies have identified side effects, such as knowledge distortion and the deterioration of general abilities, that emerge after editing. Despite these findings, evaluations of the pitfalls of knowledge editing often rely on inconsistent metrics and benchmarks, lacking a uniform standard. In response, this survey presents a comprehensive study of these side effects, providing a unified perspective on the challenges of knowledge editing in LLMs by conducting experiments with consistent metrics and benchmarks. Additionally, we review related works and outline potential research directions to address these limitations. Our survey highlights the limitations of current knowledge editing methods, emphasizing the need for a deeper understanding of the inner knowledge structures of LLMs and improved knowledge editing methods. To foster future research, we have released the complementary materials publicly at https://github.com/MiuLab/EditLLM-Survey.

Authors (7)
  1. Cheng-Hsun Hsueh (1 paper)
  2. Paul Kuo-Ming Huang (3 papers)
  3. Tzu-Han Lin (8 papers)
  4. Che-Wei Liao (1 paper)
  5. Hung-Chieh Fang (4 papers)
  6. Chao-Wei Huang (28 papers)
  7. Yun-Nung Chen (104 papers)
Citations (3)