SWEA: Updating Factual Knowledge in Large Language Models via Subject Word Embedding Altering (2401.17809v3)

Published 31 Jan 2024 in cs.CL, cs.AI, and cs.LG

Abstract: The general capabilities of LLMs make them the infrastructure for various AI applications, but updating their inner knowledge requires significant resources. Recently, model editing has emerged as a promising technique for efficiently updating a small amount of knowledge in LLMs and has attracted much attention. In particular, local editing methods, which directly update model parameters, are more suitable for updating a small amount of knowledge. Local editing methods update weights by computing least-squares closed-form solutions and identify edited knowledge by vector-level matching at inference, achieving promising results. However, these methods still require considerable time and resources to complete the computation. Moreover, vector-level matching lacks reliability, and such updates disrupt the original organization of the model's parameters. To address these issues, we propose a detachable and expandable Subject Word Embedding Altering (SWEA) framework, which finds the editing embeddings through token-level matching and adds them to the subject word embeddings in the Transformer input. To obtain these editing embeddings, we propose an optimizing-then-suppressing fusion method, which first optimizes learnable embedding vectors for the editing target and then suppresses the Knowledge Embedding Dimensions (KEDs) to obtain the final editing embeddings. We thus propose the SWEA$\oplus$OS method for editing factual knowledge in LLMs. We demonstrate the overall state-of-the-art (SOTA) performance of SWEA$\oplus$OS on the \textsc{CounterFact} and zsRE datasets. To further validate the reasoning ability of SWEA$\oplus$OS in editing knowledge, we evaluate it on the more complex \textsc{RippleEdits} benchmark. The results demonstrate that SWEA$\oplus$OS possesses SOTA reasoning ability.
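
The abstract describes an input-side mechanism: match the subject's tokens in the prompt, then add pre-computed editing embeddings at those positions, leaving the base model's weights untouched. The PyTorch sketch below illustrates that flow under stated assumptions; the function names (`find_subject_span`, `apply_swea`) and the KED-suppression heuristic are hypothetical illustrations, not the authors' implementation.

```python
import torch

def find_subject_span(input_ids: torch.Tensor, subject_ids: torch.Tensor) -> int:
    """Token-level matching: return the start index of the first occurrence
    of the subject's token sequence inside the input, or -1 if not found."""
    n, m = input_ids.numel(), subject_ids.numel()
    for i in range(n - m + 1):
        if torch.equal(input_ids[i : i + m], subject_ids):
            return i
    return -1

def suppress_keds(editing_embeds: torch.Tensor, k: int = 16) -> torch.Tensor:
    """Hypothetical stand-in for KED suppression: zero out the k
    largest-magnitude dimensions of each optimized editing vector.
    (The paper's actual criterion for identifying Knowledge Embedding
    Dimensions is more involved.)"""
    idx = editing_embeds.abs().topk(k, dim=-1).indices
    return editing_embeds.scatter(-1, idx, 0.0)

def apply_swea(input_embeds, input_ids, subject_ids, editing_embeds):
    """Add editing embeddings onto the matched subject tokens in the
    Transformer input embeddings; the model's parameters are never changed.
    input_embeds:   (seq_len, hidden) input word embeddings
    editing_embeds: (len(subject_ids), hidden) pre-optimized offsets"""
    start = find_subject_span(input_ids, subject_ids)
    if start < 0:
        return input_embeds  # subject absent: behave exactly like the base model
    edited = input_embeds.clone()
    edited[start : start + subject_ids.numel()] += editing_embeds
    return edited

# Toy usage with random tensors standing in for a real tokenizer and model.
seq_len, hidden = 10, 32
input_ids = torch.arange(seq_len)
subject_ids = input_ids[3:5]          # pretend tokens 3-4 spell the subject
input_embeds = torch.randn(seq_len, hidden)
editing_embeds = suppress_keds(torch.randn(2, hidden), k=4)
out = apply_swea(input_embeds, input_ids, subject_ids, editing_embeds)
```

Because the edit lives in added embeddings keyed to subject tokens rather than in the weights, it can be detached (drop the offsets) or expanded (register offsets for more subjects), which is what the abstract means by "detachable and expandable".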

Authors (11)
  1. Xiaopeng Li (166 papers)
  2. Shasha Li (57 papers)
  3. Bin Ji (28 papers)
  4. Shezheng Song (12 papers)
  5. Xi Wang (275 papers)
  6. Jun Ma (347 papers)
  7. Jie Yu (98 papers)
  8. Xiaodong Liu (162 papers)
  9. Jing Wang (740 papers)
  10. Weimin Zhang (16 papers)
  11. Huijun Liu (20 papers)
Citations (3)
