
Assessing Knowledge Editing in Language Models via Relation Perspective (2311.09053v1)

Published 15 Nov 2023 in cs.CL and cs.AI

Abstract: Knowledge Editing (KE) for modifying factual knowledge in LLMs has been receiving increasing attention. However, existing knowledge editing methods are entity-centric, and it is unclear whether this approach is suitable from a relation-centric perspective. To address this gap, this paper constructs a new benchmark named RaKE, which focuses on Relation-based Knowledge Editing. We establish a suite of new metrics for evaluation and conduct comprehensive experiments involving various knowledge editing baselines. We find that existing knowledge editing methods struggle to edit relations. We therefore further explore the role of relations in factual triplets within the transformer. Our results confirm that knowledge related to relations is stored not only in the FFN layers but also in the attention layers. This provides experimental support for future relation-based knowledge editing methods.
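The abstract's distinction between entity-centric and relation-centric edits can be illustrated on the (subject, relation, object) triple representation of a fact. The sketch below is not from the paper; the toy knowledge base and relation names are hypothetical, and it only shows how the two edit types target different slots of a triple:

```python
# A fact is a (subject, relation, object) triple. An entity-centric edit
# changes the object for a fixed (subject, relation) key; a relation-centric
# edit (the setting RaKE evaluates) expresses the fact under a new relation.

def apply_edit(kb, subject, relation, new_object):
    """Insert or overwrite the triple keyed by (subject, relation)."""
    kb[(subject, relation)] = new_object
    return kb

# Toy knowledge base keyed by (subject, relation) — hypothetical data.
kb = {("Eiffel Tower", "located_in"): "Paris"}

# Entity-centric edit: same relation, new object.
apply_edit(kb, "Eiffel Tower", "located_in", "Rome")

# Relation-centric edit: the same fact restated under a different relation,
# which entity-centric editors are not designed to handle.
apply_edit(kb, "Eiffel Tower", "city_of_location", "Rome")
```

A relation-centric benchmark checks whether the edited model answers consistently when the fact is probed through either relation phrasing, not just through the original one.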

Authors (7)
  1. Yifan Wei (20 papers)
  2. Xiaoyan Yu (22 papers)
  3. Huanhuan Ma (10 papers)
  4. Fangyu Lei (19 papers)
  5. Yixuan Weng (28 papers)
  6. Ran Song (23 papers)
  7. Kang Liu (207 papers)
Citations (13)