Targeted Visual Prompting for Medical Visual Question Answering (2408.03043v1)
Abstract: With growing interest in recent years, medical visual question answering (Med-VQA) has evolved rapidly, with multimodal large language models (MLLMs) emerging as an alternative to classical model architectures. In particular, their ability to incorporate visual information into the input of pre-trained LLMs brings new capabilities for image interpretation. However, simple visual errors cast doubt on the actual visual understanding abilities of these models. To address this, region-based questions have been proposed as a means to assess and enhance genuine visual understanding through compositional evaluation. Combining these two perspectives, this paper introduces targeted visual prompting to equip MLLMs with region-based questioning capabilities. By presenting the model with both the isolated region and the region in its context within a customized visual prompt, we demonstrate the effectiveness of our method across multiple datasets, comparing it against several baseline models. Our code and data are available at https://github.com/sergiotasconmorales/locvqaLLM.
- Sergio Tascon-Morales (4 papers)
- Pablo Márquez-Neila (26 papers)
- Raphael Sznitman (60 papers)
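The abstract's central idea, pairing an isolated region with the same region highlighted in its full-image context to form a visual prompt, can be sketched as below. This is a minimal illustration assuming a simple bounding-box highlight; the function name, parameters, and drawing choices are hypothetical and not the authors' implementation.

```python
# Sketch of the two visual inputs behind "targeted visual prompting":
# (1) the region cropped in isolation, and (2) the full image with the
# region outlined so the model sees it in context.
# build_targeted_prompt and its signature are illustrative assumptions.
from PIL import Image, ImageDraw


def build_targeted_prompt(image: Image.Image, box: tuple):
    """Return (isolated_region, image_with_context) for box = (x0, y0, x1, y1)."""
    isolated = image.crop(box)  # the region on its own
    context = image.copy()
    # Outline the region in red so it remains visible in its surroundings
    ImageDraw.Draw(context).rectangle(box, outline=(255, 0, 0), width=3)
    return isolated, context


# Example: a blank 224x224 image with a 50x50 region of interest
img = Image.new("RGB", (224, 224), "white")
region, in_context = build_targeted_prompt(img, (10, 10, 60, 60))
```

Both outputs would then be fed to the MLLM alongside the region-based question; how they are tokenized and combined is model-specific.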