What Large Language Models Bring to Text-rich VQA? (2311.07306v1)

Published 13 Nov 2023 in cs.CV

Abstract: Text-rich VQA, namely Visual Question Answering based on text recognition in images, is a cross-modal task that requires both image comprehension and text recognition. In this work, we investigate the advantages and bottlenecks of LLM-based approaches to this problem. To that end, we separate the vision and language modules: we leverage external OCR models to recognize texts in the image and LLMs to answer the question given the recognized texts. The whole framework is training-free, benefiting from the in-context ability of LLMs. This pipeline achieves superior performance compared to the majority of existing Multimodal LLMs (MLLMs) on four text-rich VQA datasets. Moreover, based on an ablation study, we find that the LLM brings stronger comprehension ability and may introduce helpful knowledge for the VQA problem. The bottleneck for LLMs in addressing text-rich VQA problems may primarily lie in the visual part. We also combine the OCR module with MLLMs and find that this combination also works. Notably, not all MLLMs can comprehend the OCR information, which provides insights into how to train an MLLM that preserves the abilities of the LLM.
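The decoupled pipeline the abstract describes (external OCR feeding recognized text into an LLM prompt, with no training involved) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `mock_ocr` and `mock_llm` are hypothetical stand-ins for an external OCR model and a real LLM API.

```python
def mock_ocr(image_path):
    # Hypothetical stand-in for an external OCR model; in the paper's
    # setup this would be a real OCR system run on the input image.
    return ["GRAND OPENING", "SALE 50% OFF"]

def build_prompt(ocr_tokens, question, examples=()):
    """Assemble an in-context prompt: optional few-shot examples first,
    then the OCR text and the question. No fine-tuning is involved;
    the framework relies purely on the LLM's in-context ability."""
    parts = []
    for ex_ocr, ex_q, ex_a in examples:
        parts.append(
            f"OCR text: {', '.join(ex_ocr)}\nQuestion: {ex_q}\nAnswer: {ex_a}"
        )
    parts.append(
        f"OCR text: {', '.join(ocr_tokens)}\nQuestion: {question}\nAnswer:"
    )
    return "\n\n".join(parts)

def mock_llm(prompt):
    # Hypothetical stand-in for querying an LLM with the assembled prompt.
    return "50% off" if "SALE" in prompt else "unknown"

def answer(image_path, question):
    # Vision and language modules are fully separated: OCR first,
    # then the LLM answers given only the recognized text.
    tokens = mock_ocr(image_path)
    prompt = build_prompt(tokens, question)
    return mock_llm(prompt)

print(answer("shop.jpg", "What discount is advertised?"))  # prints "50% off"
```

The key design point is that the LLM never sees pixels, only OCR output, which is what lets the paper attribute remaining errors to the visual (OCR) side rather than the language side.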

Authors (7)
  1. Xuejing Liu (14 papers)
  2. Wei Tang (135 papers)
  3. Xinzhe Ni (6 papers)
  4. Jinghui Lu (28 papers)
  5. Rui Zhao (241 papers)
  6. Zechao Li (49 papers)
  7. Fei Tan (25 papers)
Citations (8)