Interpretable Medical Image Visual Question Answering via Multi-Modal Relationship Graph Learning (2302.09636v1)

Published 19 Feb 2023 in cs.CV

Abstract: Medical visual question answering (VQA) aims to answer clinically relevant questions regarding input medical images. This technique has the potential to improve the efficiency of medical professionals while relieving the burden on the public health system, particularly in resource-poor countries. Existing medical VQA methods tend to encode medical images and learn the correspondence between visual features and questions without exploiting the spatial, semantic, or medical knowledge behind them. This is partially because current medical VQA datasets are small and often include simple questions. Therefore, we first collected a comprehensive and large-scale medical VQA dataset, focusing on chest X-ray images. The questions in our dataset involve detailed relationships, such as disease names, locations, levels, and types. Based on this dataset, we also propose a novel baseline method by constructing three different relationship graphs: spatial relationship, semantic relationship, and implicit relationship graphs on the image regions, questions, and semantic labels. The answer and graph reasoning paths are learned for different questions.
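To make the graph-construction idea concrete, here is a minimal, illustrative sketch (not the authors' implementation) of one of the three graphs described in the abstract: a spatial relationship graph over image regions. Regions are connected when their bounding boxes overlap, and a single round of neighbor-averaging message passing propagates region features along the graph. The overlap threshold and the averaging rule are assumptions for illustration only.

```python
# Illustrative sketch only: a spatial relationship graph over image
# regions, with one round of neighbor-averaging message passing.
# Boxes are (x1, y1, x2, y2); the IoU threshold is a made-up choice.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def spatial_graph(boxes, thresh=0.1):
    """Connect two regions when their boxes overlap beyond `thresh`."""
    edges = set()
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if iou(boxes[i], boxes[j]) > thresh:
                edges.add((i, j))
                edges.add((j, i))
    return edges

def message_pass(features, edges):
    """New feature of each region = mean of itself and its neighbors."""
    out = []
    for i in range(len(features)):
        group = [features[i]] + [features[j] for (a, j) in edges if a == i]
        dim = len(features[i])
        out.append([sum(v[d] for v in group) / len(group) for d in range(dim)])
    return out
```

In the paper's setting, analogous graphs are built over question tokens (semantic) and learned region affinities (implicit); this sketch only shows the spatial case with hand-coded features.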

Authors (9)
  1. Xinyue Hu (27 papers)
  2. Lin Gu (143 papers)
  3. Kazuma Kobayashi (25 papers)
  4. Qiyuan An (8 papers)
  5. Qingyu Chen (57 papers)
  6. Zhiyong Lu (113 papers)
  7. Chang Su (37 papers)
  8. Tatsuya Harada (142 papers)
  9. Yingying Zhu (39 papers)
Citations (5)
