Open-Ended Medical Visual Question Answering Through Prefix Tuning of Language Models (2303.05977v2)
Abstract: Medical Visual Question Answering (VQA) is an important challenge, as it would lead to faster and more accurate diagnoses and treatment decisions. Most existing methods approach it as a multi-class classification problem, which restricts the outcome to a predefined, closed set of curated answers. We focus on open-ended VQA and, motivated by recent advances in large language models (LLMs), treat it as a generative task. Leveraging pre-trained LLMs, we introduce a novel method particularly suited for small, domain-specific medical datasets. To properly communicate the medical images to the LLM, we develop a network that maps the extracted visual features to a set of learnable tokens. These learnable tokens, alongside the question, then directly prompt the LLM. We explore recent parameter-efficient fine-tuning strategies for LLMs, which allow for resource- and data-efficient adaptation. We evaluate our approach on the primary medical VQA benchmarks, namely Slake, OVQA and PathVQA. The results demonstrate that our approach outperforms existing methods across various training settings while also being computationally efficient.
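To make the core idea concrete, below is a minimal PyTorch sketch of the kind of mapping network the abstract describes: visual features are projected into a short sequence of prefix token embeddings that are prepended to the embedded question before prompting a (frozen or parameter-efficiently tuned) language model. The class name `VisualPrefixMapper`, the MLP architecture, and all dimensions (`visual_dim`, `lm_dim`, `num_prefix_tokens`) are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class VisualPrefixMapper(nn.Module):
    """Illustrative mapping network: projects pooled visual features to a
    sequence of prefix token embeddings in the language model's input space."""

    def __init__(self, visual_dim: int = 512, lm_dim: int = 768,
                 num_prefix_tokens: int = 8):
        super().__init__()
        self.num_prefix_tokens = num_prefix_tokens
        self.lm_dim = lm_dim
        # Small MLP mapping one pooled feature vector to
        # num_prefix_tokens embeddings of the LM's hidden size.
        self.mapper = nn.Sequential(
            nn.Linear(visual_dim, lm_dim * num_prefix_tokens),
            nn.Tanh(),
            nn.Linear(lm_dim * num_prefix_tokens, lm_dim * num_prefix_tokens),
        )

    def forward(self, visual_features: torch.Tensor) -> torch.Tensor:
        # visual_features: (batch, visual_dim), e.g. from a frozen image encoder
        prefix = self.mapper(visual_features)
        return prefix.view(-1, self.num_prefix_tokens, self.lm_dim)


# Usage sketch: prepend the visual prefix tokens to the embedded question and
# feed the combined sequence to the LM, e.g. lm(inputs_embeds=inputs_embeds).
mapper = VisualPrefixMapper()
visual_features = torch.randn(2, 512)        # stand-in for image encoder output
question_embeds = torch.randn(2, 16, 768)    # stand-in for embedded question tokens
inputs_embeds = torch.cat([mapper(visual_features), question_embeds], dim=1)
# inputs_embeds: (2, 8 + 16, 768)
```

In this setup only the mapper (and, if used, lightweight parameter-efficient adapters on the LM, such as prefix tuning or low-rank updates) would be trained, which matches the abstract's emphasis on resource- and data-efficient fine-tuning for small medical datasets.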
- Tom van Sonsbeek (7 papers)
- Mohammad Mahdi Derakhshani (13 papers)
- Ivona Najdenkoska (9 papers)
- Cees G. M. Snoek (134 papers)
- Marcel Worring (55 papers)