
Deep Co-Training for Semi-Supervised Image Segmentation (1903.11233v3)

Published 27 Mar 2019 in cs.CV

Abstract: In this paper, we aim to improve the performance of semantic image segmentation in a semi-supervised setting in which training is effectuated with a reduced set of annotated images and additional non-annotated images. We present a method based on an ensemble of deep segmentation models. Each model is trained on a subset of the annotated data, and uses the non-annotated images to exchange information with the other models, similar to co-training. Even if each model learns on the same non-annotated images, diversity is preserved with the use of adversarial samples. Our results show that this ability to simultaneously train models, which exchange knowledge while preserving diversity, leads to state-of-the-art results on two challenging medical image datasets.

Citations (162)

Summary

  • The paper presents a novel deep co-training approach for semi-supervised image segmentation using an ensemble of deep models with supervised, ensemble agreement, and adversarial diversity loss components.
  • Experiments on ACDC and SCGM datasets show the method outperforms state-of-the-art semi-supervised techniques, achieving performance closer to full supervision with limited labeled data.
  • This method improves segmentation accuracy and model robustness, which is crucial for medical imaging tasks with scarce labeled data, offering potential benefits in clinical applications.

Deep Co-Training for Semi-Supervised Image Segmentation

The paper "Deep Co-Training for Semi-Supervised Image Segmentation" presents a novel approach aimed at improving semantic image segmentation in scenarios with limited annotated data alongside an abundance of unannotated images. This research addresses the challenge of achieving high segmentation performance when only a fraction of data is labeled, a common scenario in medical imaging tasks due to the cost and complexity of manual annotation.

Overview of the Method

The authors introduce a method utilizing an ensemble of deep segmentation models, trained on subsets of annotated data while leveraging non-annotated images to exchange information, reminiscent of co-training strategies. Unlike traditional co-training, which typically relies on conditionally independent feature sets, this approach imposes diversity across models using adversarial samples.
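
For concreteness, the basic setup can be pictured as in the short PyTorch sketch below. This is an illustration under stated assumptions, not the authors' code: the data are random stand-ins, the ensemble size is a free hyperparameter, and whether the labeled subsets overlap is a design choice (disjoint here for simplicity).

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Stand-in labeled set: 60 single-channel images with integer masks
# (4 classes, as in ACDC-style cardiac segmentation).
images = torch.randn(60, 1, 128, 128)
masks = torch.randint(0, 4, (60, 128, 128))
labeled_dataset = TensorDataset(images, masks)

num_models = 3  # ensemble size; a free hyperparameter

# Each model trains on its own slice of the labeled data (disjoint here
# for simplicity); all models share the same unlabeled pool.
sizes = [len(labeled_dataset) // num_models] * num_models
sizes[-1] += len(labeled_dataset) - sum(sizes)  # absorb any remainder
labeled_splits = random_split(labeled_dataset, sizes)

# Any segmentation backbone works; a tiny conv net keeps the sketch runnable.
def make_net(num_classes=4):
    return torch.nn.Sequential(
        torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(16, num_classes, 1),
    )

models = [make_net() for _ in range(num_models)]
optimizers = [torch.optim.Adam(m.parameters()) for m in models]
```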

The training process optimizes a composite loss comprising three distinct components (a sketch of all three follows the list):

  • Supervised Loss: Keeps each network's predictions consistent with the ground-truth labels on annotated images, using cross-entropy as the loss measure.
  • Ensemble Agreement Loss: Promotes consensus among the models' predictions on unlabeled data, employing the Jensen-Shannon divergence to align the output distributions across the ensemble.
  • Diversity Loss: Leverages adversarial examples created from both annotated and unannotated images to encourage the models in the ensemble to maintain diverse decision boundaries.
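
Continuing the sketch above, the snippet below shows one plausible PyTorch form of each term. Shapes follow the usual (N, C, H, W) segmentation convention; `epsilon` and the `lambda` weights are illustrative hyperparameters, and the diversity term follows the general deep co-training formulation (each model is trained to reproduce a peer's clean prediction on that peer's adversarial examples, so the models do not share failure modes). The paper's exact implementation may differ in these details.

```python
import torch
import torch.nn.functional as F

def supervised_loss(logits, labels):
    # Cross-entropy against ground truth on annotated images.
    # logits: (N, C, H, W); labels: (N, H, W) integer masks.
    return F.cross_entropy(logits, labels)

def agreement_loss(prob_list):
    # Jensen-Shannon divergence across the ensemble's pixel-wise
    # predictive distributions on the same unlabeled batch.
    def entropy(p):  # p: (N, C, H, W) softmax output
        return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()
    mean_p = torch.stack(prob_list).mean(dim=0)
    return entropy(mean_p) - torch.stack([entropy(p) for p in prob_list]).mean()

def fgsm(model, x, epsilon=0.03):
    # One-step adversarial example from a model's own pseudo-labels
    # (epsilon is an illustrative value, not taken from the paper).
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    grad, = torch.autograd.grad(loss, x)
    return (x + epsilon * grad.sign()).detach()

def diversity_loss(model_i, model_j, x):
    # Deep co-training style diversity: model_j should reproduce
    # model_i's clean prediction on model_i's adversarial example,
    # so the two models do not develop the same failure modes.
    adv = fgsm(model_i, x)
    with torch.no_grad():
        target = model_i(x).argmax(dim=1)
    return F.cross_entropy(model_j(adv), target)

def cotraining_loss(models, x_lab, y_lab, x_unlab,
                    lambda_cot=1.0, lambda_div=0.5):
    # Composite objective; lambda weights are illustrative. (In the paper,
    # each model draws labeled batches from its own annotated subset.)
    sup = torch.stack([supervised_loss(m(x_lab), y_lab) for m in models]).mean()
    agree = agreement_loss([F.softmax(m(x_unlab), dim=1) for m in models])
    div = torch.stack([diversity_loss(models[i], models[j], x_unlab)
                       for i in range(len(models))
                       for j in range(len(models)) if i != j]).mean()
    return sup + lambda_cot * agree + lambda_div * div
```

Because the adversarial input and the pseudo-label target are both detached, each pairwise diversity term only backpropagates into the model playing the model_j role; summing over all ordered pairs ensures every ensemble member receives that gradient.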

Experimental Validation

The method's efficacy was demonstrated through a series of experiments on two challenging datasets: the Automated Cardiac Diagnosis Challenge (ACDC) and the Spinal Cord Gray Matter Challenge (SCGM). Across these tasks, the proposed approach consistently outperformed several state-of-the-art semi-supervised segmentation methods, including Pseudo-Label, Virtual Adversarial Training (VAT), and Mean Teacher.

Notably, the inclusion of the adversarial diversity loss was shown to boost segmentation accuracy, with ensembles trained using the co-training strategy achieving Dice similarity coefficients (DSC) closer to those obtained under full supervision, even when the labeled set was sharply reduced.
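
For reference, the DSC is the standard overlap metric 2|A ∩ B| / (|A| + |B|); a minimal binary-mask version in the same illustrative style (the `eps` smoothing term is a common convention, not from the paper):

```python
import torch

def dice_coefficient(pred, target, eps=1e-8):
    # DSC = 2|A ∩ B| / (|A| + |B|) for binary masks pred and target.
    pred, target = pred.float().flatten(), target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```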

Implications and Future Work

This co-training strategy is valuable not only for improving segmentation accuracy but also for promoting model robustness against adversarial perturbations, which matters in medical image analysis. The ability to perform well with limited labeled data could benefit clinical applications where data annotation is a bottleneck, offering potential improvements in workflow efficiency and diagnostic accuracy.

Looking forward, exploring how few views an ensemble needs, or using generative approaches to further increase model diversity during training, could prove beneficial. Additionally, applying the framework to image domains beyond medical imaging would test its adaptability and scalability in general computer vision tasks.

In summary, the paper adeptly navigates the intricacies of semi-supervised learning in image segmentation, presenting a compelling argument for the deployment of deep co-training methodologies in real-world imaging challenges. This approach not only broadens the scope of ensemble learning in semi-supervised contexts but also sets a foundation for future explorations into adversarial robustness and multi-model consensus strategies.