
Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis (1804.06353v2)

Published 17 Apr 2018 in cs.CV

Abstract: Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a challenge for supervised ML algorithms that is frequently mentioned is the lack of annotated data. As a result, various methods which can learn with less/other types of supervision, have been proposed. We review semi-supervised, multiple instance, and transfer learning in medical imaging, both in diagnosis/detection or segmentation tasks. We also discuss connections between these learning scenarios, and opportunities for future research.

Authors (3)
  1. Veronika Cheplygina (52 papers)
  2. Josien P. W. Pluim (39 papers)
  3. Marleen de Bruijne (53 papers)
Citations (689)

Summary

A Survey of Alternative Supervision Methods in Medical Image Analysis

The paper "Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis" provides a comprehensive overview of machine learning approaches that circumvent the challenges posed by limited annotated data in medical imaging. The researchers, affiliated with prominent institutions in the Netherlands and Denmark, focus on semi-supervised learning (SSL), multiple instance learning (MIL), and transfer learning (TL), exploring their applications and potential in medical diagnosis and segmentation tasks.

Background and Motivation

Machine learning has become integral to medical image analysis, predominantly for tasks like segmentation and computer-aided diagnostics. However, the scarcity of annotated data remains a significant bottleneck. This paper reviews techniques that leverage less conventional forms of supervision to improve machine learning performance in this domain.

Semi-Supervised Learning (SSL)

In SSL, the model learns from both labeled and unlabeled data. The survey categorizes SSL approaches into methods such as self-training, which grows the labeled set with the model's own confident predictions, and graph-based methods, which exploit the structure of the data to propagate labels. SSL has practical applications in various medical tasks, including brain segmentation and Alzheimer's disease classification. The review also addresses the risks that arise when core assumptions, such as smoothness of the data distribution, are violated.
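
To make the self-training idea concrete, here is a minimal sketch using scikit-learn's SelfTrainingClassifier; the tooling, threshold, and synthetic data are illustrative assumptions, not the setup of any surveyed study.

```python
# A minimal self-training sketch; data and hyperparameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.semi_supervised import SelfTrainingClassifier

# Simulate a mostly-unlabeled dataset: scikit-learn marks unlabeled
# samples with the label -1.
X, y = make_classification(n_samples=1000, random_state=0)
rng = np.random.RandomState(0)
y_partial = np.where(rng.rand(len(y)) < 0.9, -1, y)  # hide 90% of labels

# At each iteration, predictions above the confidence threshold are added
# to the labeled pool and the base classifier is refit.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.95)
model.fit(X, y_partial)
print(f"accuracy on hidden labels: {accuracy_score(y, model.predict(X)):.3f}")
```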

Multiple Instance Learning (MIL)

MIL is applicable when global image labels are available, but local annotations are absent. The paper differentiates between using MIL for global detection (classifying entire images), local detection (classifying specific image regions), and false positive reduction (as seen in lesion classification). The methodologies discussed include bag-level and instance-level classifiers, with MIL being particularly relevant in histopathology and cancer detection tasks.
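
The standard MIL assumption, that a bag is positive if at least one of its instances is positive, can be sketched as follows. The synthetic data and the naive label-propagation baseline are purely illustrative, not a specific method from the survey.

```python
# A sketch of the standard MIL assumption: a bag (e.g., a whole image) is
# positive if at least one instance (e.g., a patch) is positive.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

def make_bag(positive, n_instances=20, dim=16):
    X = rng.randn(n_instances, dim)
    if positive:  # plant a single "lesion-like" instance in positive bags
        X[rng.randint(n_instances)] += 3.0
    return X

bag_labels = np.array([i % 2 for i in range(100)])
bags = [make_bag(bool(label)) for label in bag_labels]

# Naive instance-level approach: propagate each bag's label to all of its
# instances and train an instance classifier on these noisy labels...
X_inst = np.vstack(bags)
y_inst = np.repeat(bag_labels, [len(b) for b in bags])
clf = LogisticRegression(max_iter=1000).fit(X_inst, y_inst)

# ...then aggregate with max-pooling: the most suspicious instance
# decides the bag-level prediction.
bag_scores = np.array([clf.predict_proba(b)[:, 1].max() for b in bags])
print(f"bag-level accuracy: {((bag_scores > 0.5) == bag_labels.astype(bool)).mean():.2f}")
```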

Transfer Learning (TL)

TL addresses scenarios where labeled data is scarce in the target domain by utilizing information from a related source domain. The paper identifies different facets of TL, such as feature and instance transfer, with applications spanning from disease classification in the brain to nodule analysis in the lungs. The paper also evaluates the burgeoning trend of leveraging large, non-medical datasets for pretraining, offering insights into the effectiveness of such methods depending on data similarity and diversity.
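
A minimal sketch of this pretraining-based feature transfer, using PyTorch and an ImageNet-pretrained ResNet-18: the backbone choice and the two-class target task are assumptions made for illustration, not the configuration of any surveyed work.

```python
# Feature transfer: reuse an ImageNet-pretrained backbone and train only a
# new head for a hypothetical two-class medical imaging task.
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pretrained on ImageNet (a large non-medical dataset).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only the new head is learned.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the 1000-way ImageNet head with a 2-way head (e.g., lesion vs. normal).
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch, for illustration.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
loss = criterion(backbone(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```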

Trends and Implications

The paper highlights TL's rise in popularity, supported by widely available datasets and pretrained models, while SSL and MIL remain constrained by the specific forms of supervision they require. Application trends show a strong focus on brain imaging, with growing interest in histology and abdominal imaging. The researchers call for more cross-scenario studies to better understand the inherent classification challenges and to improve method selection.

Future Directions

Identified opportunities include exploiting additional weak or auxiliary labels, for example through multi-task learning paradigms or crowd-sourced annotations. The paper suggests that adversarial training and unsupervised pretraining could further improve model robustness when labeled data is sparse.
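
As a rough illustration of the multi-task idea, a shared encoder can feed one head per label source, so even weak auxiliary labels shape the learned representation. The architecture and the 0.3 loss weight below are assumptions made for the sketch, not proposals from the paper.

```python
# A multi-task sketch: one shared encoder with a task-specific head per
# label source; dimensions and loss weight are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=128, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.main_head = nn.Linear(hidden, 2)  # e.g., the diagnostic label
        self.aux_head = nn.Linear(hidden, 5)   # e.g., a weak/auxiliary label

    def forward(self, x):
        h = self.shared(x)
        return self.main_head(h), self.aux_head(h)

net = MultiTaskNet()
x = torch.randn(4, 128)
main_logits, aux_logits = net(x)
# The shared encoder receives gradients from both tasks.
loss = F.cross_entropy(main_logits, torch.randint(0, 2, (4,))) \
     + 0.3 * F.cross_entropy(aux_logits, torch.randint(0, 5, (4,)))
loss.backward()
```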

Conclusion

This survey underscores the significance of alternative supervision strategies in overcoming annotation scarcity in medical image analysis. By exploring the intersections and potential synergies between SSL, MIL, and TL, the paper provides a roadmap for future advancements in the field, encouraging broader applications and deeper investigation into the underlying learning problems. Researchers are encouraged to carry these insights forward to enhance the efficacy and generalizability of not-so-supervised learning methods.