Editing Conceptual Knowledge for Large Language Models (2403.06259v2)
Abstract: Recently, there has been growing interest in knowledge editing for LLMs. Current approaches and evaluations only explore instance-level editing, and whether LLMs possess the capability to modify concepts remains unclear. This paper pioneers the investigation of editing conceptual knowledge for LLMs by constructing a novel benchmark dataset, ConceptEdit, and establishing a suite of new metrics for evaluation. The experimental results reveal that, although existing editing methods can efficiently modify concept-level definitions to some extent, they can also distort the related instance-level knowledge in LLMs, leading to poor performance. We anticipate this work can inspire further progress in better understanding LLMs. Our project homepage is available at https://zjunlp.github.io/project/ConceptEdit.
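To make the abstract's two-level evaluation idea concrete, below is a minimal sketch (not the paper's code; the function, the dict-based toy "model", and all data are hypothetical). It checks (a) whether an edited concept definition is adopted and (b) whether related instance-level facts survive the edit, which is the tension the abstract describes.

```python
# Minimal illustrative sketch, not ConceptEdit's actual evaluation code.
# The "model" is a plain dict standing in for an LLM's knowledge; all
# names and facts here are hypothetical.

def evaluate_concept_edit(model, concept, new_definition, instance_facts):
    """Return (edit_success, instance_preservation) for one concept edit."""
    # (a) Concept-level success: does the model now give the new definition?
    edit_success = model.get(("definition", concept)) == new_definition

    # (b) Instance-level preservation: fraction of related facts unchanged.
    preserved = sum(
        model.get((instance, relation)) == answer
        for instance, relation, answer in instance_facts
    )
    return edit_success, preserved / len(instance_facts)


# Toy post-edit state: the definition was updated, but one related
# instance-level fact was distorted as a side effect.
model = {
    ("definition", "penguin"): "a flightless seabird of the family Spheniscidae",
    ("emperor penguin", "habitat"): "Antarctica",   # preserved
    ("emperor penguin", "can fly"): "yes",          # distorted by the edit
}

instance_facts = [
    ("emperor penguin", "habitat", "Antarctica"),
    ("emperor penguin", "can fly", "no"),
]

success, preservation = evaluate_concept_edit(
    model, "penguin",
    "a flightless seabird of the family Spheniscidae",
    instance_facts,
)
print(f"definition adopted: {success}, instance facts preserved: {preservation:.0%}")
```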
- Xiaohan Wang
- Shengyu Mao
- Ningyu Zhang
- Shumin Deng
- Yunzhi Yao
- Yue Shen
- Lei Liang
- Jinjie Gu
- Huajun Chen