
Cross-Lingual Knowledge Editing in Large Language Models (2309.08952v2)

Published 16 Sep 2023 in cs.CL and cs.AI

Abstract: Knowledge editing aims to change an LLM's behavior on specific cases (i.e., the editing scope) by infusing the corresponding expected knowledge into the model. With recent advancements in LLMs, knowledge editing has emerged as a promising technique for adapting LLMs to new knowledge without retraining from scratch. However, most previous studies neglect the multilingual nature of mainstream LLMs (e.g., LLaMA, ChatGPT, and GPT-4) and focus on monolingual scenarios, where LLMs are edited and evaluated in the same language. As a result, the effect of editing in a source language on a different target language remains unknown. In this paper, we investigate this cross-lingual effect in knowledge editing. Specifically, we first build a large-scale cross-lingual synthetic dataset by translating ZsRE from English to Chinese. We then apply English edits with various knowledge editing methods covering different paradigms and evaluate their performance in Chinese, and vice versa. To analyze the cross-lingual effect in depth, the evaluation covers four aspects: reliability, generality, locality, and portability. Furthermore, we analyze the inconsistent behaviors of the edited models and discuss their specific challenges. Data and code are available at https://github.com/krystalan/Bi_ZsRE
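
As a rough illustration of the evaluation protocol the abstract describes, the sketch below scores an already-edited model on the four aspects (reliability, generality, locality, portability) when the edit is made in English and the probes are in Chinese. The record fields, helper names, and exact-match scoring are assumptions for illustration, not the actual Bi_ZsRE schema or the paper's metrics.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical record mirroring a Bi-ZsRE-style example: an English edit
# prompt plus Chinese probes for the four evaluation aspects. Field names
# are illustrative, not the dataset's real schema.
@dataclass
class CrossLingualEditCase:
    edit_prompt: str         # source-language (English) edit question
    target_answer: str       # expected answer after the edit
    rephrase_zh: str         # Chinese paraphrase inside the editing scope (generality)
    locality_zh: str         # unrelated Chinese question that must NOT change (locality)
    locality_answer: str     # the pre-edit answer to the locality probe
    portability_zh: str      # Chinese question requiring reasoning over the new fact
    portability_answer: str  # expected answer to the portability probe

def evaluate_cross_lingual(model: Callable[[str], str],
                           cases: List[CrossLingualEditCase]) -> dict:
    """Score a post-edit model on the four aspects in the target language.

    `model` is any prompt -> answer callable (the already-edited model).
    Exact-match scoring here is a simplification of finer-grained metrics.
    """
    hits = {"reliability": 0, "generality": 0, "locality": 0, "portability": 0}
    for c in cases:
        hits["reliability"] += model(c.edit_prompt) == c.target_answer
        hits["generality"] += model(c.rephrase_zh) == c.target_answer
        hits["locality"] += model(c.locality_zh) == c.locality_answer
        hits["portability"] += model(c.portability_zh) == c.portability_answer
    n = len(cases)
    return {aspect: count / n for aspect, count in hits.items()}
```

Swapping the languages of the edit prompt and the probes gives the reverse direction (Chinese editing evaluated in English) under the same four-aspect scheme.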

Authors (6)
  1. Jiaan Wang (35 papers)
  2. Yunlong Liang (33 papers)
  3. Zengkui Sun (7 papers)
  4. Yuxuan Cao (5 papers)
  5. Jiarong Xu (24 papers)
  6. Fandong Meng (174 papers)
Citations (11)