
R2GenCSR: Retrieving Context Samples for Large Language Model based X-ray Medical Report Generation (2408.09743v1)

Published 19 Aug 2024 in cs.CV, cs.AI, and cs.CL

Abstract: Inspired by the tremendous success of LLMs, existing X-ray medical report generation methods attempt to leverage large models to achieve better performance. They usually adopt a Transformer to extract the visual features of a given X-ray image and then feed them into the LLM for text generation. How to extract more effective information for the LLM to improve its final results remains an open problem. Additionally, visual Transformer models bring high computational complexity. To address these issues, this paper proposes a novel context-guided efficient X-ray medical report generation framework. Specifically, we introduce Mamba as a vision backbone with linear complexity, achieving performance comparable to a strong Transformer model. More importantly, we perform context retrieval from the training set for samples within each mini-batch during the training phase, utilizing both positively and negatively related samples to enhance feature representation and discriminative learning. Subsequently, we feed the vision tokens, context information, and prompt statements to the LLM to generate high-quality medical reports. Extensive experiments on three X-ray report generation datasets (i.e., IU-Xray, MIMIC-CXR, CheXpert Plus) validate the effectiveness of our proposed model. The source code of this work will be released at \url{https://github.com/Event-AHU/Medical_Image_Analysis}.
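
The core idea in the abstract is context retrieval: for each sample in a mini-batch, find the most and least similar training samples by visual-feature similarity and pass that context to the LLM alongside the vision tokens and prompt. Below is a minimal sketch of that retrieval step, not the authors' released code; all function names, the cosine-similarity choice, and the prompt format are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of the context-sample
# retrieval described in the abstract: for each batch sample, retrieve the
# k most similar (positive) and k least similar (negative) training samples
# by visual-feature similarity, then assemble an LLM prompt from their reports.

import torch
import torch.nn.functional as F


def retrieve_context(batch_feats, train_feats, k=2):
    """Return indices of the k most and k least similar training samples
    for every visual feature vector in the batch (cosine similarity)."""
    sim = F.normalize(batch_feats, dim=-1) @ F.normalize(train_feats, dim=-1).T
    pos_idx = sim.topk(k, dim=-1).indices      # positively related samples
    neg_idx = (-sim).topk(k, dim=-1).indices   # negatively related samples
    return pos_idx, neg_idx


def build_prompt(report_bank, pos_idx):
    """Assemble a textual prompt from the retrieved context reports.
    (The actual framework also feeds vision tokens to the LLM.)"""
    context = " ".join(report_bank[i] for i in pos_idx.tolist())
    return ("Context reports from similar cases: " + context +
            "\nGenerate the radiology report for the given X-ray image.")


if __name__ == "__main__":
    torch.manual_seed(0)
    train_feats = torch.randn(100, 512)              # stand-in visual features
    report_bank = [f"report {i}" for i in range(100)]
    batch_feats = torch.randn(4, 512)
    pos, neg = retrieve_context(batch_feats, train_feats, k=2)
    print(build_prompt(report_bank, pos[0]))
```

In this sketch the negative indices would be used for discriminative (e.g., contrastive-style) feature learning during training, while the positive context reports are concatenated into the generation prompt.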

Authors (6)
  1. Xiao Wang (508 papers)
  2. Yuehang Li (7 papers)
  3. Fuling Wang (7 papers)
  4. Shiao Wang (17 papers)
  5. Chuanfu Li (7 papers)
  6. Bo Jiang (236 papers)
Citations (2)
