RJUA-MedDQA: A Multimodal Benchmark for Medical Document Question Answering and Clinical Reasoning (2402.14840v1)

Published 19 Feb 2024 in cs.CL, cs.AI, and stat.AP

Abstract: Recent advancements in LLMs and Large Multi-modal Models (LMMs) have shown potential in various medical applications, such as Intelligent Medical Diagnosis. Although impressive results have been achieved, we find that existing benchmarks do not reflect the complexity of real medical reports or the specialized in-depth reasoning they require. In this work, we introduce RJUA-MedDQA, a comprehensive benchmark in the field of medical specialization, which poses several challenges: comprehensively interpreting image content across diverse, challenging layouts; possessing the numerical reasoning ability to identify abnormal indicators; and demonstrating the clinical reasoning ability to provide statements of disease diagnosis, status, and advice based on medical contexts. We carefully design the data generation pipeline and propose the Efficient Structural Restoration Annotation (ESRA) method, aimed at restoring textual and tabular content in medical report images. This method substantially enhances annotation efficiency, doubling the productivity of each annotator, and yields a 26.8% improvement in accuracy. We conduct extensive evaluations, including few-shot assessments of 5 LMMs capable of solving Chinese medical QA tasks. To further investigate the limitations and potential of current LMMs, we conduct comparative experiments on a set of strong LLMs using image text generated by the ESRA method. We report the performance of baselines and offer several observations: (1) the overall performance of existing LMMs is still limited; (2) LMMs are more robust to low-quality and diversely structured images compared to LLMs; (3) reasoning across context and image content presents significant challenges. We hope this benchmark helps the community make progress on these challenging tasks in multi-modal medical document understanding and facilitates its application in healthcare.
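The two evaluation settings described above (LMMs answering directly from report images versus LLMs answering from ESRA-restored text) can be sketched as a simple accuracy loop. All names below are illustrative stand-ins, not the paper's actual code or data:

```python
# Hypothetical sketch of the two evaluation settings from the abstract:
# an LMM answers from a report image, an LLM answers from ESRA-restored
# text. Model functions and examples here are toy placeholders.

def evaluate(model, examples):
    """Return exact-match accuracy of `model` over (input, answer) pairs."""
    correct = sum(model(ex["input"]) == ex["answer"] for ex in examples)
    return correct / len(examples)

# Toy stand-in for a multimodal model reading a report image.
def toy_lmm(report_image_path):
    return "abnormal" if "high" in report_image_path else "normal"

# Toy stand-in for a text-only LLM reading ESRA-restored report text.
def toy_llm(esra_text):
    return "abnormal" if "(high)" in esra_text else "normal"

lmm_examples = [
    {"input": "report_scan_glucose_high.png", "answer": "abnormal"},
    {"input": "report_scan_routine.png", "answer": "normal"},
]
llm_examples = [
    {"input": "Fasting glucose: 11.2 mmol/L (high)", "answer": "abnormal"},
    {"input": "Fasting glucose: 5.0 mmol/L", "answer": "normal"},
]

print(evaluate(toy_lmm, lmm_examples))  # 1.0
print(evaluate(toy_llm, llm_examples))  # 1.0
```

In the benchmark's actual comparison, the LLM setting isolates reasoning ability from document perception: any accuracy gap between the two settings reflects how much the models struggle with reading the images themselves.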

Authors (16)
  1. Congyun Jin (2 papers)
  2. Ming Zhang (313 papers)
  3. Xiaowei Ma (3 papers)
  4. Li Yujiao (1 paper)
  5. Yingbo Wang (14 papers)
  6. Yabo Jia (1 paper)
  7. Yuliang Du (2 papers)
  8. Tao Sun (143 papers)
  9. Haowen Wang (25 papers)
  10. Cong Fan (6 papers)
  11. Jinjie Gu (50 papers)
  12. Chenfei Chi (3 papers)
  13. Xiangguo Lv (1 paper)
  14. Fangzhou Li (5 papers)
  15. Wei Xue (149 papers)
  16. Yiran Huang (13 papers)
Citations (2)