
Time Sensitive Knowledge Editing through Efficient Finetuning (2406.04496v2)

Published 6 Jun 2024 in cs.CL, cs.AI, and cs.LG

Abstract: LLMs have demonstrated impressive capabilities across many tasks and are bringing transformative changes to many domains. However, keeping the knowledge in LLMs up-to-date remains a challenge once pretraining is complete. It is thus essential to design effective methods to both update obsolete knowledge and inject new knowledge into LLMs. Existing locate-and-edit knowledge editing (KE) methods suffer from two limitations. First, LLMs edited by such methods generally perform poorly on complex queries that require multi-hop reasoning. Second, the long run-time of such locate-and-edit methods makes large-scale KE infeasible in practice. In this paper, we explore Parameter-Efficient Fine-Tuning (PEFT) techniques as an alternative for KE. We curate a more comprehensive temporal KE dataset with both knowledge update and knowledge injection examples for KE performance benchmarking. We further probe the effect of fine-tuning on a range of layers in an LLM for the multi-hop QA task. We find that PEFT performs better than locate-and-edit techniques for time-sensitive knowledge edits.
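The abstract contrasts locate-and-edit methods with PEFT. One widely used PEFT technique is LoRA, which freezes the pretrained weights and trains only a low-rank correction per layer. The sketch below (hypothetical layer sizes and rank, not taken from the paper) illustrates why this is cheap: for a weight matrix of size d_out × d_in, only r·(d_in + d_out) parameters are trainable.

```python
import numpy as np

# Minimal LoRA-style sketch: instead of updating the full weight matrix W,
# train a low-rank correction B @ A. Only A and B change during editing.
rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4                 # hypothetical layer size and rank
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight

A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (init 0)

def forward(x, B, A):
    """Adapted forward pass: base weight plus low-rank update."""
    return (W + B @ A) @ x

x = rng.standard_normal(d_in)
# With B initialised to zero, the adapted layer matches the frozen base
# layer exactly, so editing starts from the pretrained behaviour.
assert np.allclose(forward(x, B, A), W @ x)

# Trainable parameters: r*(d_in + d_out) = 512, versus 4096 for full W.
n_trainable = A.size + B.size
print(n_trainable)  # 512
```

A gradient step on A and B (driven by a loss over the new or updated facts) would then perform the knowledge edit while the base weights stay frozen; this low per-edit cost is what makes PEFT attractive for large-scale KE compared to locate-and-edit pipelines.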

Authors (8)
  1. Xiou Ge (13 papers)
  2. Ali Mousavi (24 papers)
  3. Edouard Grave (56 papers)
  4. Armand Joulin (81 papers)
  5. Kun Qian (87 papers)
  6. Benjamin Han (9 papers)
  7. Mostafa Arefiyan (3 papers)
  8. Yunyao Li (43 papers)
Citations (4)