Qilin-Med-VL: Towards Chinese Large Vision-Language Model for General Healthcare (2310.17956v2)
Abstract: Large language models (LLMs) have introduced a new era of proficiency in comprehending complex healthcare and biomedical topics. However, there is a noticeable lack of models in languages other than English, as well as models that can interpret multi-modal input, which is crucial for global healthcare accessibility. In response, this study introduces Qilin-Med-VL, the first Chinese large vision-language model designed to integrate the analysis of textual and visual data. Qilin-Med-VL combines a pre-trained Vision Transformer (ViT) with a foundational LLM. It undergoes a thorough two-stage curriculum training process that includes feature alignment and instruction tuning. This method enhances the model's ability to generate medical captions and answer complex medical queries. We also release ChiMed-VL, a dataset consisting of more than 1M image-text pairs. This dataset has been carefully curated to enable detailed and comprehensive interpretation of medical data using various types of images.
- Junling Liu
- Ziming Wang
- Qichen Ye
- Dading Chong
- Peilin Zhou
- Yining Hua
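
For a concrete picture of the architecture the abstract describes, below is a minimal PyTorch sketch of the stage-1 feature-alignment setup: a frozen pre-trained ViT produces patch embeddings, a small trainable projector maps them into the LLM's token-embedding space, and the frozen LLM consumes the projected visual tokens alongside text embeddings. All class names, dimensions, and the linear-projector design are illustrative assumptions for this sketch, not the authors' released code.

```python
import torch
import torch.nn as nn

class FeatureAlignmentModel(nn.Module):
    """Stage-1 sketch: train only the vision-to-LLM projector (assumed design)."""

    def __init__(self, vision_encoder: nn.Module, llm: nn.Module,
                 vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.vision_encoder = vision_encoder  # pre-trained ViT, kept frozen
        self.llm = llm                        # foundational LLM, kept frozen
        # The only trainable piece in stage 1: a projection aligning visual
        # patch features with the LLM's token-embedding space.
        self.projector = nn.Linear(vision_dim, llm_dim)
        for module in (self.vision_encoder, self.llm):
            for p in module.parameters():
                p.requires_grad = False

    def forward(self, images: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # images: (B, 3, H, W) -> patch features: (B, n_patches, vision_dim)
        patch_feats = self.vision_encoder(images)
        visual_tokens = self.projector(patch_feats)  # (B, n_patches, llm_dim)
        # Prepend visual tokens to the text embeddings so the frozen LLM
        # models the caption conditioned on the image.
        return self.llm(torch.cat([visual_tokens, text_embeds], dim=1))
```

In the second curriculum stage (instruction tuning), the same wiring would presumably be kept while more parameters are unfrozen so the model learns to follow medical instructions; the exact freezing schedule is a detail of the paper, not of this sketch.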