
RetinalGPT: A Retinal Clinical Preference Conversational Assistant Powered by Large Vision-Language Models (2503.03987v1)

Published 6 Mar 2025 in cs.CV, cs.AI, cs.CL, and cs.LG

Abstract: Recently, Multimodal Large Language Models (MLLMs) have gained significant attention for their remarkable ability to process and analyze non-textual data, such as images, videos, and audio. Notably, several adaptations of general-domain MLLMs to the medical field have been explored, including LLaVA-Med. However, these medical adaptations remain insufficiently advanced in understanding and interpreting retinal images. In contrast, medical experts emphasize the importance of quantitative analyses for disease detection and interpretation. This underscores a gap between general-domain and medical-domain MLLMs: while general-domain MLLMs excel in broad applications, they lack the specialized knowledge necessary for precise diagnostic and interpretative tasks in the medical field. To address these challenges, we introduce RetinalGPT, a multimodal conversational assistant for clinically preferred quantitative analysis of retinal images. Specifically, we achieve this by compiling a large retinal image dataset, developing a novel data pipeline, and employing customized visual instruction tuning to enhance both retinal analysis and enrich medical knowledge. In particular, RetinalGPT outperforms MLLM in the generic domain by a large margin in the diagnosis of retinal diseases in 8 benchmark retinal datasets. Beyond disease diagnosis, RetinalGPT features quantitative analyses and lesion localization, representing a pioneering step in leveraging LLMs for an interpretable and end-to-end clinical research framework. The code is available at https://github.com/Retinal-Research/RetinalGPT

The paper "RetinalGPT: A Retinal Clinical Preference Conversational Assistant Powered by Large Vision-Language Models" introduces RetinalGPT, a multimodal conversational assistant designed for clinically preferred quantitative analysis of retinal images using Multimodal Large Language Models (MLLMs). The paper addresses the limitations of general-domain MLLMs in specialized tasks such as retinal image interpretation, which is crucial for diagnosing ocular diseases. The authors highlight the gap between general-domain and medical-domain MLLMs and propose RetinalGPT to bridge it by enhancing retinal disease diagnosis capabilities.

Key Contributions:

  1. Retinal-Specific Dataset and Pipeline:
    • The paper details the creation of a large, diverse dataset of approximately 38,000 retinal images. This dataset is enriched with disease labels, lesion bounding boxes, and vascular features. The data pipeline includes clinical data extraction using tools like AutoMorph for fractal analysis of retinal vascular structures, assigning clinically meaningful features to each image.
  2. Instruction Tuning and Training:
    • RetinalGPT uses customized visual instruction tuning to enhance its retinal analysis capabilities. Through a two-stage training strategy, the model adapts generic-domain VLMs to retinal-domain tasks while preserving broader biomedical knowledge:
      • Stage 1 (Feature Alignment): Mixup of retinal-specific and general biomedical datasets is used to tune the model, maintaining generic medical domain knowledge.
      • Stage 2 (Mixup Instruction-Tuning): Fine-tuning on a mixed dataset combining retinal-specific instruction data with generic medical data helps retain general medical understanding alongside retinal-specific capabilities.
  3. Performance and Evaluation:
    • RetinalGPT is evaluated against several state-of-the-art models on eight benchmark datasets covering multiple ophthalmic diseases. It demonstrates superior performance, particularly in disease diagnosis, lesion localization, and vascular structure analysis.
    • Results showcase RetinalGPT's successful lesion localization capability, predicting lesion bounding boxes with high accuracy compared to ground truth annotations. It also accurately estimates vascular feature values, validating the precision of its analysis.
  4. Generalization to Generic Medical Domain:
    • When tested on generic medical questions, RetinalGPT produces responses similar to those of LLaVA-Med, indicating that it preserves knowledge beyond the retinal domain and suggesting applicability in broader medical imaging contexts.
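On the data-pipeline side, the summary mentions AutoMorph-based fractal analysis of retinal vascular structures, but does not spell out the computation. A standard fractal measure for a binary vessel mask is the box-counting dimension; the sketch below (function name and box sizes are illustrative, not AutoMorph's actual API) shows the idea:

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate the fractal dimension of a binary mask via box counting."""
    counts = []
    for s in sizes:
        # Trim so the mask is divisible by the box size, then count boxes
        # that contain at least one foreground (vessel) pixel.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # The slope of log N(s) versus log(1/s) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```

A filled region yields a dimension near 2 and a one-pixel curve near 1; retinal vasculature typically falls in between, which is what makes the value a clinically meaningful feature.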
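The mixup strategy in Stages 1 and 2 amounts to composing a training set from retinal-specific and generic biomedical samples. The paper's exact mixing ratio and sampling scheme are not given in this summary, so `generic_ratio` below is a hypothetical parameter; this is a minimal sketch of the dataset composition, not the authors' implementation:

```python
import random

def mix_instruction_data(retinal, generic, generic_ratio=0.3, seed=0):
    """Combine all retinal-specific samples with a sampled fraction of
    generic biomedical samples, so fine-tuning retains broad medical
    knowledge alongside retinal-specific capabilities."""
    rng = random.Random(seed)
    n_generic = min(int(len(retinal) * generic_ratio), len(generic))
    mixed = list(retinal) + rng.sample(list(generic), n_generic)
    rng.shuffle(mixed)  # interleave the two sources for training
    return mixed
```

Keeping the generic fraction nonzero in both stages is what guards against catastrophic forgetting of the base model's biomedical knowledge.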

Conclusion:

RetinalGPT marks a significant advancement in retinal image analysis, leveraging large-scale multimodal models to improve the quantitative and interpretative dimensions of clinical diagnostics. It stands out for integrating broad biomedical knowledge with focused retinal expertise to support a detailed, interpretable, end-to-end clinical framework. The authors note a limitation in the model's modality-centric initial responses and plan to address it in future work to improve conversational dynamics.

Authors (11)
  1. Wenhui Zhu
  2. Xin Li
  3. Xiwen Chen
  4. Peijie Qiu
  5. Vamsi Krishna Vasa
  6. Xuanzhao Dong
  7. Yanxi Chen
  8. Natasha Lepore
  9. Oana Dumitrascu
  10. Yi Su
  11. Yalin Wang