
VLKEB: A Large Vision-Language Model Knowledge Editing Benchmark (2403.07350v3)

Published 12 Mar 2024 in cs.CL, cs.AI, and cs.CV

Abstract: Recently, knowledge editing on LLMs has received considerable attention. Compared to this, editing Large Vision-Language Models (LVLMs) faces extra challenges from diverse data modalities and complicated model components, and data for LVLM editing are limited. The existing LVLM editing benchmark, which comprises three metrics (Reliability, Locality, and Generality), falls short in the quality of its synthesized evaluation images and cannot assess whether models apply edited knowledge in relevant content. Therefore, we employ more reliable data collection methods to construct a new Large Vision-Language model Knowledge Editing Benchmark, VLKEB, and extend the Portability metric for more comprehensive evaluation. Leveraging a multi-modal knowledge graph, our image data are bound to knowledge entities. These entities can be further used to extract entity-related knowledge, which forms the basis of the editing data. We conduct experiments with different editing methods on five LVLMs and thoroughly analyze how they impact the models. The results reveal the strengths and deficiencies of these methods and will hopefully provide insights for future research. The code and dataset are available at: https://github.com/VLKEB/VLKEB.
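For context, the four metrics named in the abstract are commonly operationalized in the knowledge-editing literature as exact-match checks over four kinds of probes: the edit prompt itself (Reliability), a rephrasing of it (Generality), an out-of-scope input whose answer must not change (Locality), and a connected question, e.g. one hop away in the knowledge graph (Portability). The sketch below is a minimal illustration of that scoring scheme, not the VLKEB authors' code; the `model_generate(prompt, image)` callable and all record field names are hypothetical placeholders.

```python
# Illustrative sketch of the four editing metrics as exact-match probes.
# `model_generate` and the field names of `edit` are assumptions, not
# taken from the VLKEB repository.

def exact_match(model_generate, prompt, image, target):
    """Return 1 if the post-edit model's answer matches the target, else 0."""
    return int(model_generate(prompt, image).strip() == target.strip())

def evaluate_edit(model_generate, edit):
    """Score a single edited fact against the four probe types."""
    return {
        # Reliability: the edited fact itself is recalled correctly.
        "reliability": exact_match(
            model_generate, edit["prompt"], edit["image"], edit["target"]),
        # Generality: the edit survives a rephrased version of the prompt.
        "generality": exact_match(
            model_generate, edit["rephrased_prompt"], edit["image"],
            edit["target"]),
        # Locality: an unrelated input still gets its pre-edit answer,
        # i.e. the edit did not bleed into out-of-scope knowledge.
        "locality": exact_match(
            model_generate, edit["unrelated_prompt"], edit["unrelated_image"],
            edit["pre_edit_answer"]),
        # Portability: the edited knowledge transfers to a connected fact,
        # e.g. a one-hop question derived from the knowledge graph.
        "portability": exact_match(
            model_generate, edit["one_hop_prompt"], edit["image"],
            edit["one_hop_target"]),
    }
```

In practice, each score would be averaged over the whole benchmark, and the Portability probes are what distinguish applying edited knowledge from merely parroting it.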

Authors (7)
  1. Han Huang (71 papers)
  2. Haitian Zhong (5 papers)
  3. Qiang Liu (405 papers)
  4. Shu Wu (109 papers)
  5. Liang Wang (512 papers)
  6. Tieniu Tan (119 papers)
  7. Tao Yu (282 papers)
Citations (5)