Open-Ended Medical Visual Question Answering Through Prefix Tuning of Language Models

Published 10 Mar 2023 in cs.CV (arXiv:2303.05977v2)

Abstract: Medical Visual Question Answering (VQA) is an important challenge, as it would lead to faster and more accurate diagnoses and treatment decisions. Most existing methods approach it as a multi-class classification problem, which restricts the outcome to a predefined, closed set of curated answers. We focus on open-ended VQA and, motivated by recent advances in LLMs, approach it as a generative task. Leveraging pre-trained LLMs, we introduce a novel method particularly suited for small, domain-specific medical datasets. To properly communicate the medical images to the LLM, we develop a network that maps the extracted visual features to a set of learnable tokens. Then, alongside the question, these learnable tokens directly prompt the LLM. We explore recent parameter-efficient fine-tuning strategies for LLMs, which allow for resource- and data-efficient fine-tuning. We evaluate our approach on the prime medical VQA benchmarks, namely Slake, OVQA, and PathVQA. The results demonstrate that our approach outperforms existing methods across various training settings while also being computationally efficient.
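The core idea described in the abstract, a mapping network that turns extracted image features into a small number of prompt tokens placed before the question, can be illustrated with a minimal sketch. This is not the authors' implementation: GPT-2 (via Hugging Face `transformers`) stands in for the LLM, and the names and sizes (`VisualPrefixMapper`, `NUM_PREFIX_TOKENS`, `VISUAL_DIM`) are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (not the authors' code) of the visual-prefix idea: a small
# mapping network projects pooled image features into a fixed number of
# LLM-sized token embeddings, which are prepended to the question embeddings
# and fed to the language model. GPT-2 is used only as a stand-in LLM; all
# dimensions below are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

NUM_PREFIX_TOKENS = 8   # number of learnable visual tokens (assumed)
VISUAL_DIM = 512        # dimensionality of the extracted visual features (assumed)


class VisualPrefixMapper(nn.Module):
    """Maps a pooled visual feature vector to a sequence of prefix token embeddings."""

    def __init__(self, visual_dim: int, llm_dim: int, num_tokens: int):
        super().__init__()
        self.num_tokens = num_tokens
        self.llm_dim = llm_dim
        self.proj = nn.Sequential(
            nn.Linear(visual_dim, llm_dim * num_tokens),
            nn.Tanh(),
            nn.Linear(llm_dim * num_tokens, llm_dim * num_tokens),
        )

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (batch, visual_dim) -> (batch, num_tokens, llm_dim)
        prefix = self.proj(visual_feats)
        return prefix.view(-1, self.num_tokens, self.llm_dim)


tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
llm = GPT2LMHeadModel.from_pretrained("gpt2")
llm_dim = llm.config.n_embd

mapper = VisualPrefixMapper(VISUAL_DIM, llm_dim, NUM_PREFIX_TOKENS)

# Placeholder visual features; in practice these come from a pretrained image encoder.
visual_feats = torch.randn(1, VISUAL_DIM)
prefix_embeds = mapper(visual_feats)                    # (1, NUM_PREFIX_TOKENS, llm_dim)

question = "What abnormality is seen in the chest X-ray?"
q_ids = tokenizer(question, return_tensors="pt").input_ids
q_embeds = llm.transformer.wte(q_ids)                   # (1, L_question, llm_dim)

# The visual prefix directly prompts the LLM alongside the question.
inputs_embeds = torch.cat([prefix_embeds, q_embeds], dim=1)
outputs = llm(inputs_embeds=inputs_embeds)
next_token_logits = outputs.logits[:, -1, :]            # start of the generated answer
```

In a setup consistent with the abstract's emphasis on resource- and data-efficient training, only the mapping network (and, optionally, parameter-efficient adapters inside the LLM) would receive gradient updates, while the bulk of the pre-trained LLM weights stay frozen; the exact fine-tuning strategies used in the paper are not reproduced here.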

Citations (41)
