Language-guided Scale-aware MedSegmentor for Lesion Segmentation in Medical Imaging (2408.17347v3)
Abstract: In clinical practice, segmenting specific lesions according to physicians' needs can significantly enhance diagnostic accuracy and treatment efficiency. However, conventional lesion segmentation models lack the flexibility to distinguish lesions according to specific requirements. Given the practical advantages of using text as guidance, we propose a novel model, Language-guided Scale-aware MedSegmentor (LSMS), which segments target lesions in medical images based on given textual expressions. We define this as a new task, Referring Lesion Segmentation (RLS). To address the lack of suitable benchmarks for RLS, we construct a vision-language medical dataset named Reference Hepatic Lesion Segmentation (RefHL-Seg). LSMS incorporates two key designs: (i) a Scale-Aware Vision-Language Attention module, which performs visual feature extraction and vision-language alignment in parallel. By leveraging convolutional kernels of diverse sizes, this module acquires rich visual representations and interacts closely with linguistic features, enhancing the model's capacity for precise object localization. (ii) a Full-Scale Decoder, which globally models multi-modal features across multiple scales and captures the complementary information among them to accurately delineate lesion boundaries. Additionally, we design a specialized loss function comprising a segmentation loss and a vision-language contrastive loss to better optimize cross-modal learning. We validate LSMS on RLS as well as on conventional lesion segmentation tasks across multiple datasets. LSMS consistently achieves superior performance at significantly lower computational cost. Code and datasets will be released.
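The abstract states only that the training objective combines a segmentation loss with a vision-language contrastive loss, without giving the exact formulation. Below is a minimal sketch of one plausible combination, assuming a Dice-plus-BCE segmentation term and a symmetric InfoNCE-style contrastive term over pooled image and text embeddings. The class name `RLSLoss`, the weight `lambda_con`, and the temperature value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RLSLoss(nn.Module):
    """Hypothetical combined objective for referring lesion segmentation:
    (Dice + BCE) segmentation loss plus a symmetric InfoNCE-style
    vision-language contrastive loss. The weighting and the InfoNCE
    formulation are assumptions, not specifics from the paper."""

    def __init__(self, lambda_con: float = 0.1, temperature: float = 0.07):
        super().__init__()
        self.lambda_con = lambda_con      # assumed weight on the contrastive term
        self.temperature = temperature    # assumed softmax temperature

    def dice_loss(self, logits: torch.Tensor, target: torch.Tensor,
                  eps: float = 1e-6) -> torch.Tensor:
        # logits, target: (B, 1, H, W); target is a binary float mask.
        prob = torch.sigmoid(logits)
        inter = (prob * target).sum(dim=(1, 2, 3))
        union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

    def contrastive_loss(self, img_emb: torch.Tensor,
                         txt_emb: torch.Tensor) -> torch.Tensor:
        # img_emb, txt_emb: (B, D) pooled embeddings; in-batch negatives.
        img_emb = F.normalize(img_emb, dim=-1)
        txt_emb = F.normalize(txt_emb, dim=-1)
        logits = img_emb @ txt_emb.t() / self.temperature
        labels = torch.arange(logits.size(0), device=logits.device)
        # Symmetric cross-entropy: image-to-text and text-to-image.
        return 0.5 * (F.cross_entropy(logits, labels)
                      + F.cross_entropy(logits.t(), labels))

    def forward(self, seg_logits, seg_target, img_emb, txt_emb):
        seg = (self.dice_loss(seg_logits, seg_target)
               + F.binary_cross_entropy_with_logits(seg_logits, seg_target))
        con = self.contrastive_loss(img_emb, txt_emb)
        return seg + self.lambda_con * con
```

In such a setup, `img_emb` and `txt_emb` would be pooled features from the visual and language branches, so the contrastive term pulls each image toward its paired referring expression while pushing it away from in-batch negatives; the segmentation term supervises the predicted mask directly.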