A Simple Framework Uniting Visual In-context Learning with Masked Image Modeling to Improve Ultrasound Segmentation (2402.14300v3)

Published 22 Feb 2024 in cs.CV

Abstract: Conventional deep learning models process images one by one, requiring costly and time-consuming expert labeling in medical imaging, and domain-specific restrictions limit model generalizability. Visual in-context learning (ICL) is a new and exciting area of research in computer vision: unlike conventional deep learning, ICL emphasizes a model's ability to adapt quickly to new tasks from given examples. Inspired by MAE-VQGAN, we propose SimICL, a simple new visual ICL method that combines visual ICL image pairing with masked image modeling (MIM) designed for self-supervised learning. We validated our method on bony structure segmentation in a wrist ultrasound (US) dataset with limited annotations, where the clinical objective was to segment bony structures to support subsequent fracture detection. On a test set of 3822 images from 18 patients, SimICL achieved a remarkably high Dice coefficient (DC) of 0.96 and Jaccard index (IoU) of 0.92, surpassing state-of-the-art segmentation and visual ICL models (maximum DC 0.86 and IoU 0.76), improving DC and IoU by up to 0.10 and 0.16, respectively. This high agreement with limited manual annotations indicates that SimICL could be used to train AI models even on small US datasets, dramatically decreasing the expert time required for image labeling compared to conventional approaches and enhancing the real-world use of AI assistance in US image analysis.
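
The abstract describes SimICL only at a high level: a support image and its mask are paired with a query image in an in-context layout (as in MAE-VQGAN), and masked image modeling reconstructs the missing query mask; results are scored with the Dice coefficient and Jaccard index. The sketch below is a minimal illustration: the 2x2 canvas layout, function names, and NumPy implementation are assumptions rather than the authors' code, while the Dice and IoU formulas are the standard definitions.

```python
import numpy as np

def build_icl_canvas(support_img, support_mask, query_img, fill_value=0.0):
    # Assemble a 2x2 in-context canvas in the MAE-VQGAN style:
    # top row holds the support example (image, mask); bottom row holds
    # the query image next to a masked quadrant that masked image
    # modeling learns to reconstruct as the predicted query mask.
    # NOTE: this exact layout is an assumption, not the paper's design.
    h, w = support_img.shape
    canvas = np.full((2 * h, 2 * w), fill_value, dtype=np.float32)
    canvas[:h, :w] = support_img    # support image
    canvas[:h, w:] = support_mask   # support segmentation mask
    canvas[h:, :w] = query_img      # query image
    # canvas[h:, w:] is left at fill_value: the masked target region
    return canvas

def dice_and_iou(pred, target, eps=1e-8):
    # Standard Dice coefficient and Jaccard index (IoU) for binary masks.
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)
    iou = inter / (union + eps)
    return float(dice), float(iou)
```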

References (22)
  1. S. He, R. Bao, J. Li, J. Stout, A. Bjornerud, P. E. Grant, and Y. Ou, “Computer-Vision Benchmark Segment-Anything Model (SAM) in Medical Images: Accuracy in 12 Datasets.” [Online]. Available: http://arxiv.org/abs/2304.09324
  2. J. Knight, Y. Zhou, C. Keen, A. R. Hareendranathan, F. Alves-Pereira, S. Ghasseminia, S. Wichuk, A. Brilz, D. Kirschner, and J. Jaremko, “2D/3D ultrasound diagnosis of pediatric distal radius fractures by human readers vs artificial intelligence,” Sci Rep, vol. 13, no. 1, p. 14535. [Online]. Available: https://www.nature.com/articles/s41598-023-41807-w
  3. C. Chen, W. Bai, R. H. Davies, A. N. Bhuva, C. H. Manisty, J. B. Augusto, J. C. Moon, N. Aung, A. M. Lee, M. M. Sanghvi, K. Fung, J. M. Paiva, S. E. Petersen, E. Lukaschuk, S. K. Piechnik, S. Neubauer, and D. Rueckert, “Improving the Generalizability of Convolutional Neural Network-Based Segmentation on CMR Images,” Front Cardiovasc Med, vol. 7, p. 105. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7344224/
  4. L. Xu, M. Xu, Y. Ke, X. An, S. Liu, and D. Ming, “Cross-Dataset Variability Problem in EEG Decoding With Deep Learning,” Front Hum Neurosci, vol. 14, p. 103. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7188358/
  5. B. Felfeliyan, N. D. Forkert, A. Hareendranathan, D. Cornel, Y. Zhou, G. Kuntze, J. L. Jaremko, and J. L. Ronsky, “Self-supervised-RCNN for medical image segmentation with limited data annotation,” Computerized Medical Imaging and Graphics, vol. 109, p. 102297. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0895611123001155
  6. Y. Zhou, J. Knight, B. Felfeliyan, S. Ghosh, F. Alves-Pereira, C. Keen, A. R. Hareendranathan, and J. L. Jaremko, “Self-Supervised Learning to More Efficiently Generate Segmentation Masks for Wrist Ultrasound,” Simplifying Medical Ultrasound, pp. 79–88.
  7. Q. Dong, L. Li, D. Dai, C. Zheng, Z. Wu, B. Chang, X. Sun, J. Xu, L. Li, and Z. Sui, “A Survey on In-context Learning.” [Online]. Available: https://arxiv.org/abs/2301.00234v3
  8. J. Zhang, B. Wang, L. Li, Y. Nakashima, and H. Nagahara, “Instruct Me More! Random Prompting for Visual In-Context Learning,” Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2597–2606. [Online]. Available: https://openaccess.thecvf.com/content/WACV2024/html/Zhang_Instruct_Me_More_Random_Prompting_for_Visual_In-Context_Learning_WACV_2024_paper.html
  9. H. Bahng, A. Jahanian, S. Sankaranarayanan, and P. Isola, “Exploring Visual Prompts for Adapting Large-Scale Models.” [Online]. Available: https://arxiv.org/abs/2203.17274v2
  10. J. Wu, X. Li, C. Wei, H. Wang, A. Yuille, Y. Zhou, and C. Xie, “Unleashing the Power of Visual Prompting At the Pixel Level.” [Online]. Available: https://arxiv.org/abs/2212.10556v2
  11. Y. Zhang, K. Zhou, and Z. Liu, “What Makes Good Examples for Visual In-Context Learning?” [Online]. Available: https://arxiv.org/abs/2301.13670v2
  12. Y. Sun, Q. Chen, J. Wang, J. Wang, and Z. Li, “Exploring Effective Factors for Improving Visual In-Context Learning.” [Online]. Available: https://arxiv.org/abs/2304.04748v1
  13. X. Wang, W. Wang, Y. Cao, C. Shen, and T. Huang, “Images Speak in Images: A Generalist Painter for In-Context Visual Learning,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6830–6839. [Online]. Available: https://openaccess.thecvf.com/content/CVPR2023/html/Wang_Images_Speak_in_Images_A_Generalist_Painter_for_In-Context_Visual_CVPR_2023_paper.html
  14. X. Wang, X. Zhang, Y. Cao, W. Wang, C. Shen, and T. Huang, “SegGPT: Towards Segmenting Everything in Context,” Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1130–1140. [Online]. Available: https://openaccess.thecvf.com/content/ICCV2023/html/Wang_SegGPT_Towards_Segmenting_Everything_in_Context_ICCV_2023_paper.html
  15. Y. Liu, X. Chen, X. Ma, X. Wang, J. Zhou, Y. Qiao, and C. Dong, “Unifying Image Processing as Visual Prompting Question Answering.” [Online]. Available: https://arxiv.org/abs/2310.10513v1
  16. A. Bar, Y. Gandelsman, T. Darrell, A. Globerson, and A. Efros, “Visual Prompting via Image Inpainting,” Advances in Neural Information Processing Systems, vol. 35, pp. 25005–25017. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2022/hash/9f09f316a3eaf59d9ced5ffaefe97e0f-Abstract-Conference.html
  17. Z. Xie, Z. Zhang, Y. Cao, Y. Lin, J. Bao, Z. Yao, Q. Dai, and H. Hu, “SimMIM: A Simple Framework for Masked Image Modeling,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9653–9663. [Online]. Available: https://openaccess.thecvf.com/content/CVPR2022/html/Xie_SimMIM_A_Simple_Framework_for_Masked_Image_Modeling_CVPR_2022_paper.html
  18. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.” [Online]. Available: http://arxiv.org/abs/2010.11929
  19. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pp. 234–241.
  20. F. Isensee, P. F. Jaeger, S. A. A. Kohl, J. Petersen, and K. H. Maier-Hein, “nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation,” Nat Methods, vol. 18, no. 2, pp. 203–211. [Online]. Available: https://www.nature.com/articles/s41592-020-01008-z
  21. K. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969. [Online]. Available: https://openaccess.thecvf.com/content_iccv_2017/html/He_Mask_R-CNN_ICCV_2017_paper.html
  22. K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. Girshick, “Masked Autoencoders Are Scalable Vision Learners,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000–16009. [Online]. Available: https://openaccess.thecvf.com/content/CVPR2022/html/He_Masked_Autoencoders_Are_Scalable_Vision_Learners_CVPR_2022_paper.html
