Prism: A Framework for Decoupling and Assessing the Capabilities of VLMs (2406.14544v1)

Published 20 Jun 2024 in cs.CV and cs.CL

Abstract: Vision Language Models (VLMs) demonstrate remarkable proficiency in addressing a wide array of visual questions, which requires strong perception and reasoning faculties. Assessing these two competencies independently is crucial for model refinement, despite the inherent difficulty due to the intertwined nature of seeing and reasoning in existing VLMs. To tackle this issue, we present Prism, an innovative framework designed to disentangle the perception and reasoning processes involved in visual question solving. Prism comprises two distinct stages: a perception stage that utilizes a VLM to extract and articulate visual information in textual form, and a reasoning stage that formulates responses based on the extracted visual information using a large language model (LLM). This modular design enables the systematic comparison and assessment of both proprietary and open-source VLMs for their perception and reasoning strengths. Our analytical framework provides several valuable insights, underscoring Prism's potential as a cost-effective solution for vision-language tasks. By combining a streamlined VLM focused on perception with a powerful LLM tailored for reasoning, Prism achieves superior results in general vision-language tasks while substantially cutting down on training and operational expenses. Quantitative evaluations show that Prism, when configured with a vanilla 2B LLaVA and freely accessible GPT-3.5, delivers performance on par with VLMs $10 \times$ larger on the rigorous multimodal benchmark MMStar. The project is released at: https://github.com/SparksJoe/Prism.
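
The two-stage design described in the abstract (a VLM for perception, an LLM for reasoning) can be pictured as a simple pipeline. The sketch below is illustrative only: the `vlm_describe` and `llm_answer` callables are hypothetical placeholders, not the released Prism API, and any captioning VLM (e.g. a 2B LLaVA) and any text-only LLM (e.g. GPT-3.5) could be wired in.

```python
# Minimal sketch of a Prism-style decoupled pipeline (illustrative, not the
# authors' implementation): stage 1 turns the image into text, stage 2
# answers the question from that text alone.
from dataclasses import dataclass
from typing import Callable


@dataclass
class PrismStylePipeline:
    # Perception stage: maps (image_path, instruction) -> textual description.
    vlm_describe: Callable[[str, str], str]
    # Reasoning stage: maps a text-only prompt -> answer string.
    llm_answer: Callable[[str], str]

    def answer(self, image_path: str, question: str) -> str:
        # Stage 1: extract and articulate visual information in textual form.
        instruction = (
            "Describe the image in detail: objects, attributes, text, "
            "layout, and anything needed to answer questions about it."
        )
        description = self.vlm_describe(image_path, instruction)

        # Stage 2: formulate a response from the extracted description only,
        # so perception and reasoning can be assessed independently.
        prompt = (
            f"Image description:\n{description}\n\n"
            f"Question: {question}\n"
            "Answer using only the description above."
        )
        return self.llm_answer(prompt)


# Example wiring with dummy stand-ins, just to show the call pattern:
if __name__ == "__main__":
    pipeline = PrismStylePipeline(
        vlm_describe=lambda img, instr: "A bar chart with three bars: A=3, B=7, C=5.",
        llm_answer=lambda prompt: "B, with a value of 7.",
    )
    print(pipeline.answer("chart.png", "Which bar is tallest?"))
```

Because the interface between the two stages is plain text, either component can be swapped or evaluated in isolation, which is the decoupling the framework exploits.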

Authors (9)
  1. Yuxuan Qiao
  2. Haodong Duan
  3. Xinyu Fang
  4. Junming Yang
  5. Lin Chen
  6. Songyang Zhang
  7. Jiaqi Wang
  8. Dahua Lin
  9. Kai Chen
Citations (7)