Sam-Guided Enhanced Fine-Grained Encoding with Mixed Semantic Learning for Medical Image Captioning (2311.01004v2)
Abstract: With the development of multimodal models and large language models (LLMs), deep learning-based medical image captioning holds the potential to offer valuable diagnostic recommendations. However, generic pre-trained text and image models do not yield satisfactory results when describing the intricate details within medical images. In this paper, we present a novel medical image captioning method guided by the Segment Anything Model (SAM) that enables enhanced encoding with both general and detailed feature extraction. In addition, our approach employs a distinctive pre-training strategy with mixed semantic learning to simultaneously capture both the overall information and the finer details within medical images. We demonstrate the effectiveness of this approach: it outperforms the pre-trained BLIP-2 model on various evaluation metrics for generating descriptions of medical images.
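The abstract's core idea of combining general (whole-image) and detailed (region-level) features can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the pooling scheme, and the stand-in features and masks are all assumptions; in the actual method the masks would come from SAM and the embeddings from a vision encoder feeding a caption decoder.

```python
import numpy as np

def encode_with_sam_guidance(image_feats, sam_masks):
    """Illustrative dual-branch encoding (names are hypothetical):
    combine a global feature of the whole image with detail features
    pooled over SAM-proposed regions.

    image_feats: (H, W, D) patch embeddings from a vision encoder.
    sam_masks:   list of (H, W) boolean masks, e.g. from SAM's
                 automatic mask generator.
    Returns a (1 + num_nonempty_masks, D) token sequence that a
    caption decoder could attend over.
    """
    # Coarse branch: one token summarizing the whole image.
    global_token = image_feats.mean(axis=(0, 1))
    # Fine-grained branch: one token per SAM region (skip empty masks).
    detail_tokens = [
        image_feats[m].mean(axis=0)
        for m in sam_masks if m.any()
    ]
    return np.stack([global_token, *detail_tokens])

# Toy usage with random stand-in features and masks (no real SAM call).
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 8, 16))
masks = [np.zeros((8, 8), dtype=bool), np.ones((8, 8), dtype=bool)]
masks[0][:4, :4] = True  # a small "lesion-like" region
tokens = encode_with_sam_guidance(feats, masks)
print(tokens.shape)  # (3, 16): 1 global token + 2 region tokens
```

The decoder then sees both the scene-level summary and region-level tokens, which is one plausible way to realize the "general and detailed feature extraction" the abstract claims.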
- “Medclip: Contrastive learning from unpaired medical images and text,” 2022.
- “Contrastive learning of medical visual representations from paired images and text,” in Machine Learning for Healthcare Conference. PMLR, 2022, pp. 2–25.
- “Medklip: Medical knowledge enhanced language-image pre-training,” medRxiv preprint, 2023.
- “Align before fuse: Vision and language representation learning with momentum distillation,” in Advances in neural information processing systems, 2021.
- “Learning transferable visual models from natural language supervision,” 2021.
- “Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation,” in International Conference on Machine Learning, 2022.
- “Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models,” arXiv preprint arXiv:2301.12597, 2023.
- Zhu Yi and Li Xiu, “A survey of medical image captioning technique: encoding, decoding and latest advance,” Journal of Image and Graphics, 2023.
- “Segment anything,” arXiv preprint arXiv:2304.02643, 2023.
- “Microsoft coco: Common objects in context,” in Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part V, 2014, pp. 740–755.
- “Medicat: A dataset of medical images, captions, and textual references,” arXiv preprint arXiv:2010.06000, 2020.
- “Radiology objects in context (roco): a multimodal image dataset,” in Intravascular Imaging and Computer Assisted Stenting and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis: 7th Joint International Workshop, CVII-STENT 2018 and Third International Workshop, LABELS 2018, held in conjunction with MICCAI, 2018.
- “Opt: Open pre-trained transformer language models,” arXiv preprint arXiv:2205.01068, 2022.
- “Masked autoencoders are scalable vision learners,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2022, pp. 16000–16009.
- “Bleu: a method for automatic evaluation of machine translation,” in Proceedings of the 40th annual meeting of the Association for Computational Linguistics, 2002, pp. 311–318.
- S. Banerjee and A. Lavie, “Meteor: An automatic metric for mt evaluation with improved correlation with human judgments,” in Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 2005.
- Chin-Yew Lin, “Rouge: A package for automatic evaluation of summaries,” in Text summarization branches out, 2004.
- “Cider: Consensus-based image description evaluation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015.
- “Bertscore: Evaluating text generation with bert,” arXiv preprint arXiv:1904.09675, 2019.
- “Bartscore: Evaluating generated text as text generation,” Advances in Neural Information Processing Systems, 2021.
- “Bleurt: Learning robust metrics for text generation,” in Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2020.