Probing the Visualization Literacy of Vision Language Models: the Good, the Bad, and the Ugly (2504.05445v1)
Abstract: Vision Language Models (VLMs) demonstrate promising chart comprehension capabilities. Yet, prior explorations of their visualization literacy have been limited to assessing response correctness and have not examined their internal reasoning. To address this gap, we adapted attention-guided class activation maps (AG-CAM) to VLMs in order to visualize the influence and importance of input features (image and text) on model responses. Using this approach, we examined four open-source models (ChartGemma, Janus 1B and 7B, and LLaVA) and two closed-source models (GPT-4o, Gemini), comparing their performance and, for the open-source models, their AG-CAM results. Overall, we found that ChartGemma, a 3B-parameter VLM fine-tuned for chart question answering (QA), outperformed the other open-source models and performed on par with significantly larger closed-source VLMs. We also found that VLMs exhibit spatial reasoning by accurately localizing key chart features, and semantic reasoning by associating visual elements with corresponding data values and query tokens. Our approach is the first to demonstrate AG-CAM on early-fusion VLM architectures, which are widely used, and on chart QA. We also show preliminary evidence that these results can align with human reasoning. Our promising results with open-source VLMs pave the way for transparent and reproducible research in AI visualization literacy.
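To make the AG-CAM idea concrete, the sketch below shows one common recipe for gradient-weighted attention relevance in a transformer-based VLM: attention maps are weighted by the gradients of the answer-token logit, positive contributions are accumulated across layers, and the relevance assigned to image-patch tokens is reshaped back onto the patch grid. This is a minimal illustration, not the paper's exact AG-CAM formulation; the function name, tensor shapes, and image-token layout are assumptions for the example.

```python
# Minimal sketch of attention-gradient class-activation mapping for a
# transformer-based VLM. Illustrative only: the real AG-CAM adaptation in the
# paper may weight, aggregate, or normalize attention differently.

import torch

def ag_cam(
    attentions: list[torch.Tensor],   # per-layer attention, each [heads, seq, seq]
    attn_grads: list[torch.Tensor],   # gradients of the answer logit w.r.t. each attention map
    image_token_slice: slice,         # positions of image-patch tokens in the input sequence
    answer_pos: int,                  # position of the generated answer token
    grid_size: tuple[int, int],       # patch grid, e.g. (24, 24) for a 336px ViT with 14px patches
) -> torch.Tensor:
    """Return an [H, W] relevance map over image patches."""
    seq_len = attentions[0].shape[-1]
    relevance = torch.zeros(seq_len)
    for attn, grad in zip(attentions, attn_grads):
        # Gradient-weighted attention: keep positive contributions and
        # average over heads (Grad-CAM-style weighting applied to attention).
        cam = (grad * attn).clamp(min=0).mean(dim=0)  # [seq, seq]
        relevance += cam[answer_pos]                  # relevance row of the answer token
    patch_relevance = relevance[image_token_slice]    # keep only image-patch positions
    heatmap = patch_relevance.reshape(grid_size)
    return heatmap / (heatmap.max() + 1e-8)           # normalize to [0, 1]

# Toy usage with random tensors standing in for a real forward/backward pass.
if __name__ == "__main__":
    layers, heads, seq = 2, 8, 600
    attns = [torch.rand(heads, seq, seq) for _ in range(layers)]
    grads = [torch.randn(heads, seq, seq) for _ in range(layers)]
    heat = ag_cam(attns, grads, slice(1, 577), answer_pos=seq - 1, grid_size=(24, 24))
    print(heat.shape)  # torch.Size([24, 24])
```

In practice the resulting heatmap would be upsampled to the chart image's resolution and overlaid on it, which is what allows the spatial and semantic reasoning patterns described in the abstract to be inspected visually.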