Open-Ended Medical Visual Question Answering Through Prefix Tuning of Language Models (2303.05977v2)

Published 10 Mar 2023 in cs.CV

Abstract: Medical Visual Question Answering (VQA) is an important challenge, as it would lead to faster and more accurate diagnoses and treatment decisions. Most existing methods approach it as a multi-class classification problem, which restricts the outcome to a predefined closed set of curated answers. We focus on open-ended VQA and, motivated by the recent advances in LLMs, consider it as a generative task. Leveraging pre-trained LLMs, we introduce a novel method particularly suited for small, domain-specific medical datasets. To properly communicate the medical images to the LLM, we develop a network that maps the extracted visual features to a set of learnable tokens. Then, alongside the question, these learnable tokens directly prompt the LLM. We explore recent parameter-efficient fine-tuning strategies for LLMs, which allow for resource- and data-efficient fine-tuning. We evaluate our approach on the prime medical VQA benchmarks, namely, Slake, OVQA and PathVQA. The results demonstrate that our approach outperforms existing methods across various training settings while also being computationally efficient.
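To make the mechanism described in the abstract concrete, below is a minimal sketch of mapping extracted visual features to learnable prefix tokens that prompt a frozen language model alongside the question. The choice of GPT-2 as the language model, an MLP as the mapping network, and all dimensions and names are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch of the visual-prefix idea: map image features to a few
# tokens in the LM's embedding space and prepend them to the question.
# Assumptions (not from the paper): GPT-2 as the LM, an MLP mapper,
# prefix length 8, and a 512-dim visual feature vector.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer


class VisualPrefixMapper(nn.Module):
    """Maps a visual feature vector to a fixed number of learnable
    prefix tokens living in the language model's embedding space."""

    def __init__(self, visual_dim=512, lm_dim=768, prefix_len=8):
        super().__init__()
        self.prefix_len = prefix_len
        self.mlp = nn.Sequential(
            nn.Linear(visual_dim, lm_dim * prefix_len),
            nn.Tanh(),
            nn.Linear(lm_dim * prefix_len, lm_dim * prefix_len),
        )

    def forward(self, visual_feats):                 # (B, visual_dim)
        B = visual_feats.size(0)
        prefix = self.mlp(visual_feats)              # (B, lm_dim * prefix_len)
        return prefix.view(B, self.prefix_len, -1)   # (B, prefix_len, lm_dim)


tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
for p in lm.parameters():        # keep the LM frozen; only the mapper
    p.requires_grad = False      # (and any PEFT adapters) would be trained

mapper = VisualPrefixMapper()

# Dummy visual features standing in for the output of an image encoder.
visual_feats = torch.randn(1, 512)
prefix_embeds = mapper(visual_feats)                 # (1, 8, 768)

question = "What abnormality is seen in the lung?"
q_ids = tokenizer(question, return_tensors="pt").input_ids
q_embeds = lm.transformer.wte(q_ids)                 # (1, T, 768)

# Prompt the LM with [visual prefix ; question] and read off the next token.
inputs_embeds = torch.cat([prefix_embeds, q_embeds], dim=1)
outputs = lm(inputs_embeds=inputs_embeds)
next_token = outputs.logits[:, -1].argmax(-1)
print(tokenizer.decode(next_token))
```

In this sketch only the mapper's parameters receive gradients; in practice one would combine it with a parameter-efficient fine-tuning strategy for the LM, as the paper explores.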

Authors (5)
  1. Tom van Sonsbeek (7 papers)
  2. Mohammad Mahdi Derakhshani (13 papers)
  3. Ivona Najdenkoska (9 papers)
  4. Cees G. M. Snoek (134 papers)
  5. Marcel Worring (55 papers)
Citations (41)
