MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs (2406.08772v2)
Abstract: Current multimodal misinformation detection (MMD) methods often assume a single source and type of forgery for each sample, an assumption that breaks down in real-world scenarios where multiple forgery sources coexist. The absence of a benchmark for mixed-source misinformation has hindered progress in this field. To address this, we introduce MMFakeBench, the first comprehensive benchmark for mixed-source MMD. MMFakeBench covers 3 critical sources: textual veracity distortion, visual veracity distortion, and cross-modal consistency distortion, along with 12 sub-categories of misinformation forgery types. We further conduct an extensive evaluation of 6 prevalent detection methods and 15 large vision-language models (LVLMs) on MMFakeBench in a zero-shot setting. The results indicate that current methods struggle in this challenging and realistic mixed-source MMD setting. Additionally, we propose an innovative unified framework that integrates the rationale, action, and tool-use capabilities of LVLM agents, significantly improving accuracy and generalization. We believe this study will catalyze future research into more realistic mixed-source multimodal misinformation and provide a fair evaluation of misinformation detection methods.
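To make the rationale, action, and tool-use loop concrete, below is a minimal Python sketch of how such an agent pipeline could be structured for the three distortion sources the abstract names. It is purely illustrative: the tool names, the fixed scores, the priority order, and the decision threshold are all assumptions for this sketch, not the paper's actual framework.

```python
# Minimal sketch of a rationale -> action (tool calls) -> decision loop
# for mixed-source MMD. All tool names, scores, and rules below are
# illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class Sample:
    image_path: str
    claim_text: str

# Hypothetical tools, stubbed with fixed scores so the sketch runs
# standalone. Each returns the estimated probability of one distortion
# source from the abstract's taxonomy.

def fact_check_text(sample: Sample) -> float:
    """Stand-in for retrieval-based fact checking of the claim text."""
    return 0.9

def detect_image_forgery(sample: Sample) -> float:
    """Stand-in for a synthetic/manipulated-image detector."""
    return 0.1

def check_cross_modal_alignment(sample: Sample) -> float:
    """Stand-in for an image-text consistency scorer (higher = mismatch)."""
    return 0.2

# Tool name -> (callable, label assigned when the tool fires).
TOOLS: Dict[str, Tuple[Callable[[Sample], float], str]] = {
    "fact_check_text": (fact_check_text, "textual_veracity_distortion"),
    "detect_image_forgery": (detect_image_forgery, "visual_veracity_distortion"),
    "check_cross_modal_alignment": (check_cross_modal_alignment,
                                    "cross_modal_consistency_distortion"),
}

def agent_judge(sample: Sample, threshold: float = 0.5) -> dict:
    """Toy agent loop. A real system would let the LVLM write the
    rationale, choose which tools to call, and weigh their outputs;
    here every step is hard-coded to keep the example self-contained."""
    rationale = (f"Check whether '{sample.claim_text}' is truthful and "
                 f"consistent with {sample.image_path}.")
    scores = {name: fn(sample) for name, (fn, _) in TOOLS.items()}
    # Fixed priority: text veracity, then image veracity, then consistency.
    for name, (_, label) in TOOLS.items():
        if scores[name] >= threshold:
            return {"rationale": rationale, "scores": scores, "label": label}
    return {"rationale": rationale, "scores": scores, "label": "real"}

if __name__ == "__main__":
    verdict = agent_judge(Sample("flood.jpg", "City hall collapsed today."))
    print(verdict["label"])  # -> textual_veracity_distortion
```

The fixed priority order and keyword-free scoring here are design shortcuts for readability; an actual LVLM agent would decide dynamically which tool to invoke based on its generated rationale.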
- Xuannan Liu
- Zekun Li
- Peipei Li
- Shuhan Xia
- Xing Cui
- Linzhi Huang
- Huaibo Huang
- Weihong Deng
- Zhaofeng He