BMIKE-53: Investigating Cross-Lingual Knowledge Editing with In-Context Learning (2406.17764v1)

Published 25 Jun 2024 in cs.CL and cs.AI

Abstract: LLMs possess extensive parametric knowledge, but this knowledge is difficult to update with new information because retraining is very expensive and infeasible for closed-source models. Knowledge editing (KE) has emerged as a viable solution for updating the knowledge of LLMs without compromising their overall performance. On-the-fly KE methods, inspired by in-context learning (ICL), have shown great promise and allow LLMs to be treated as black boxes. In the past, KE was primarily employed in English contexts, whereas the potential for cross-lingual KE in current English-centric LLMs has not been fully explored. To foster more research in this direction, we introduce the BMIKE-53 benchmark for evaluating cross-lingual KE on 53 diverse languages across three KE task types. We also propose a gradient-free KE method called Multilingual In-context Knowledge Editing (MIKE) and evaluate it on BMIKE-53. Our evaluation focuses on cross-lingual knowledge transfer in terms of reliability, generality, locality, and portability, offering valuable insights and a framework for future research in cross-lingual KE. Our code and data are publicly accessible via the anonymous repository at https://anonymous.4open.science/r/MIKE.
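
The abstract describes a gradient-free, in-context approach: an edited fact and a few demonstrations are supplied as plain prompt text, the LLM is queried as a black box, and the edit is scored along reliability, generality, locality, and portability, including queries posed in other languages. As a rough illustration, here is a minimal Python sketch of how such an on-the-fly editing loop could be wired up; the prompt format, the helper names (`build_icl_edit_prompt`, `evaluate_edit`), and the toy `fake_llm` stand-in are assumptions for illustration, not the paper's actual MIKE implementation or data.

```python
from typing import Callable

# Hypothetical black-box LLM interface: a prompt string in, a completion string out.
# Any chat/completion API could sit behind this; no gradients or weight access needed.
QueryFn = Callable[[str], str]


def build_icl_edit_prompt(new_fact: str,
                          demonstrations: list[tuple[str, str]],
                          question: str) -> str:
    """Assemble an in-context knowledge-editing prompt.

    The edited fact and a few (question, answer) demonstrations are prepended
    as plain text before the probe question, so the model is treated as a black box.
    """
    parts = [f"New fact: {new_fact}"]
    for q, a in demonstrations:
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)


def evaluate_edit(query_llm: QueryFn,
                  new_fact: str,
                  demos: list[tuple[str, str]],
                  probes: dict[str, tuple[str, str]]) -> dict[str, bool]:
    """Score one edit along named evaluation dimensions.

    `probes` maps a dimension name ("reliability", "generality", "locality",
    "portability") to a (question, expected_answer) pair; for the cross-lingual
    setting, the probe question can be in a different language than the edit.
    """
    results = {}
    for dimension, (question, expected) in probes.items():
        prompt = build_icl_edit_prompt(new_fact, demos, question)
        answer = query_llm(prompt)
        results[dimension] = expected.lower() in answer.lower()
    return results


if __name__ == "__main__":
    # Toy stand-in for a real LLM call, so the sketch runs end to end.
    def fake_llm(prompt: str) -> str:
        return "Berlin" if "ExampleCorp" in prompt else "unknown"

    fact = "The headquarters of ExampleCorp moved to Berlin."  # hypothetical edit
    demos = [("Where is the headquarters of ExampleCorp?", "Berlin")]
    probes = {
        # Same question as the edit: does the model return the new answer?
        "reliability": ("Where is the headquarters of ExampleCorp?", "Berlin"),
        # German paraphrase probes cross-lingual generalization of the edit.
        "generality": ("Wo ist der Hauptsitz von ExampleCorp?", "Berlin"),
    }
    print(evaluate_edit(fake_llm, fact, demos, probes))
```

Swapping `fake_llm` for a wrapper around a real completion endpoint is all that changes for an actual black-box model, which is what makes this style of editing applicable to closed-source LLMs.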

Authors (6)
  1. Ercong Nie
  2. Bo Shao
  3. Zifeng Ding
  4. Mingyang Wang
  5. Helmut Schmid
  6. Hinrich Schütze
Citations (2)