
MMKB-RAG: A Multi-Modal Knowledge-Based Retrieval-Augmented Generation Framework (2504.10074v3)

Published 14 Apr 2025 in cs.AI

Abstract: Recent advancements in LLMs and multi-modal LLMs have been remarkable. However, these models still rely solely on their parametric knowledge, which limits their ability to generate up-to-date information and increases the risk of producing erroneous content. Retrieval-Augmented Generation (RAG) partially mitigates these challenges by incorporating external data sources, yet the reliance on databases and retrieval systems can introduce irrelevant or inaccurate documents, ultimately undermining both performance and reasoning quality. In this paper, we propose Multi-Modal Knowledge-Based Retrieval-Augmented Generation (MMKB-RAG), a novel multi-modal RAG framework that leverages the inherent knowledge boundaries of models to dynamically generate semantic tags for the retrieval process. This strategy enables the joint filtering of retrieved documents, retaining only the most relevant and accurate references. Extensive experiments on knowledge-based visual question-answering tasks demonstrate the efficacy of our approach: on the E-VQA dataset, our method improves performance by +4.2% on the Single-Hop subset and +0.4% on the full dataset, while on the InfoSeek dataset, it achieves gains of +7.8% on the Unseen-Q subset, +8.2% on the Unseen-E subset, and +8.1% on the full dataset. These results highlight significant enhancements in both accuracy and robustness over the current state-of-the-art MLLM and RAG frameworks.
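The abstract describes the core mechanism only at a high level: the model generates semantic tags for a query, and retrieved documents are jointly filtered so that only the most relevant references are kept. A minimal sketch of that tag-based filtering idea is below; the function name, document schema, and overlap threshold are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of tag-based document filtering: the model emits
# semantic tags for a query, each retrieved document carries its own
# tags, and only documents whose tags sufficiently overlap the query's
# are retained as references. All names and the threshold are
# illustrative, not taken from the paper.

def filter_by_tags(query_tags, documents, min_overlap=2):
    """Keep documents sharing at least `min_overlap` tags with the query."""
    q = set(query_tags)
    return [doc for doc in documents if len(q & set(doc["tags"])) >= min_overlap]

docs = [
    {"text": "Eiffel Tower history", "tags": ["landmark", "paris", "architecture"]},
    {"text": "Paris metro map",      "tags": ["paris", "transport"]},
    {"text": "Tower Bridge facts",   "tags": ["landmark", "london"]},
]

# Only the first document shares at least two tags with the query.
kept = filter_by_tags(["landmark", "paris", "monument"], docs, min_overlap=2)
```

In the full framework this filtering would sit between the retriever and the generator, so the MLLM conditions only on references that pass the tag check rather than on everything the retriever returns.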

Authors (8)
  1. Zihan Ling (1 paper)
  2. Zhiyao Guo (2 papers)
  3. Yixuan Huang (40 papers)
  4. Yi An (6 papers)
  5. Shuai Xiao (31 papers)
  6. Jinsong Lan (11 papers)
  7. Xiaoyong Zhu (12 papers)
  8. Bo Zheng (205 papers)
