
MIKE: A New Benchmark for Fine-grained Multimodal Entity Knowledge Editing (2402.14835v1)

Published 18 Feb 2024 in cs.CL, cs.AI, and cs.LG

Abstract: Multimodal knowledge editing represents a critical advancement in enhancing the capabilities of Multimodal LLMs (MLLMs). Despite its potential, current benchmarks predominantly focus on coarse-grained knowledge, leaving the intricacies of fine-grained (FG) multimodal entity knowledge largely unexplored. This gap presents a notable challenge, as FG entity recognition is pivotal for the practical deployment and effectiveness of MLLMs in diverse real-world scenarios. To bridge this gap, we introduce MIKE, a comprehensive benchmark and dataset specifically designed for FG multimodal entity knowledge editing. MIKE encompasses a suite of tasks tailored to assess different perspectives, including Vanilla Name Answering, Entity-Level Caption, and Complex-Scenario Recognition. In addition, a new form of knowledge editing, Multi-step Editing, is introduced to evaluate editing efficiency. Through our extensive evaluations, we demonstrate that current state-of-the-art methods face significant challenges in tackling our proposed benchmark, underscoring the complexity of FG knowledge editing in MLLMs. Our findings spotlight the urgent need for novel approaches in this domain, setting a clear agenda for future research and development efforts within the community.

Authors (9)
  1. Jiaqi Li (142 papers)
  2. Miaozeng Du (2 papers)
  3. Chuanyi Zhang (19 papers)
  4. Yongrui Chen (23 papers)
  5. Nan Hu (34 papers)
  6. Guilin Qi (60 papers)
  7. Haiyun Jiang (34 papers)
  8. Siyuan Cheng (41 papers)
  9. Bozhong Tian (13 papers)
Citations (10)