Editing Conceptual Knowledge for Large Language Models (2403.06259v2)

Published 10 Mar 2024 in cs.CL, cs.AI, cs.DB, cs.IR, and cs.LG

Abstract: Recently, there has been growing interest in knowledge editing for LLMs. Current approaches and evaluations merely explore instance-level editing, while it remains unclear whether LLMs possess the capability to modify concepts. This paper pioneers the investigation of editing conceptual knowledge for LLMs by constructing a novel benchmark dataset, ConceptEdit, and establishing a suite of new metrics for evaluation. The experimental results reveal that, although existing editing methods can efficiently modify concept-level definitions to some extent, they also have the potential to distort the related instantial knowledge in LLMs, leading to poor performance. We anticipate this work can inspire further progress in better understanding LLMs. Our project homepage is available at https://zjunlp.github.io/project/ConceptEdit.
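The abstract contrasts instance-level editing (rewriting a single fact about one entity) with concept-level editing (rewriting the definition of a whole class of entities), and notes that concept edits can distort the related instance knowledge. The sketch below is a minimal, hypothetical Python illustration of that distinction; the dataclass fields and the toy `instances_preserved` check are assumptions for illustration only, not the ConceptEdit dataset's actual schema or the paper's evaluation metrics.

```python
# Hypothetical sketch: instance-level vs. concept-level edit records,
# plus a toy proxy for "did the concept edit leave instance facts intact?".
# Field names and the check below are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class InstanceEdit:
    """Rewrites a single fact about one specific entity."""
    subject: str      # e.g. "Eiffel Tower"
    relation: str     # e.g. "located_in"
    new_object: str   # the edited target value


@dataclass
class ConceptEdit:
    """Rewrites the definition of an entire concept (a class of entities)."""
    concept: str
    new_definition: str
    member_instances: List[str] = field(default_factory=list)
    # Instances of the concept whose facts should NOT be distorted by the edit.


def instances_preserved(edit: ConceptEdit, model_answers: Dict[str, bool]) -> float:
    """Toy proxy metric: fraction of member instances the edited model
    still answers correctly (True = correct after the concept edit)."""
    if not edit.member_instances:
        return 1.0
    kept = sum(1 for inst in edit.member_instances if model_answers.get(inst, False))
    return kept / len(edit.member_instances)


if __name__ == "__main__":
    edit = ConceptEdit(
        concept="tower",
        new_definition="A tall, narrow structure built for observation or signaling.",
        member_instances=["Eiffel Tower", "Tokyo Tower"],
    )
    # Pretend the edited model answers one instance-level question correctly, one incorrectly.
    print(instances_preserved(edit, {"Eiffel Tower": True, "Tokyo Tower": False}))  # 0.5
```

A real evaluation would query the edited model for each instance-level fact and compare against gold answers; the point of the sketch is only that concept-level and instance-level edits are distinct objects and must be scored against each other.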

Authors (9)
  1. Xiaohan Wang (91 papers)
  2. Shengyu Mao (11 papers)
  3. Ningyu Zhang (148 papers)
  4. Shumin Deng (65 papers)
  5. Yunzhi Yao (27 papers)
  6. Yue Shen (243 papers)
  7. Lei Liang (37 papers)
  8. Jinjie Gu (50 papers)
  9. Huajun Chen (198 papers)
Citations (10)