Exploiting the potential of unlabeled endoscopic video data with self-supervised learning (1711.09726v3)

Published 27 Nov 2017 in cs.CV

Abstract: Surgical data science is a new research field that aims to observe all aspects of the patient treatment process in order to provide the right assistance at the right time. Due to the breakthrough successes of deep learning-based solutions for automatic image annotation, the availability of reference annotations for algorithm training is becoming a major bottleneck in the field. The purpose of this paper was to investigate the concept of self-supervised learning to address this issue. Our approach is guided by the hypothesis that unlabeled video data can be used to learn a representation of the target domain that boosts the performance of state-of-the-art machine learning algorithms when used for pre-training. The core of the method is an auxiliary task based on raw endoscopic video data of the target domain that is used to initialize the convolutional neural network (CNN) for the target task. In this paper, we propose the re-colorization of medical images with a generative adversarial network (GAN)-based architecture as the auxiliary task. A variant of the method involves a second pre-training step based on labeled data for the target task from a related domain. We validate both variants using medical instrument segmentation as the target task. The proposed approach can be used to radically reduce the manual annotation effort involved in training CNNs. Compared to the baseline approach of generating annotated data from scratch, our method reduces the number of labeled images required by up to 75% in exploratory experiments without sacrificing performance. Our method also outperforms alternative methods for CNN pre-training, such as pre-training on publicly available non-medical or medical data using the target task (in this instance: segmentation). As it makes efficient use of available (non-)public and (un-)labeled data, the approach has the potential to become a valuable tool for CNN (pre-)training.
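
The following is a minimal, hedged sketch of the two-stage idea described in the abstract: first pre-train an encoder-decoder network on a self-supervised re-colorization task with an adversarial loss, using only raw (unlabeled) endoscopic frames, then reuse the learned encoder to initialize the network for the labeled target task (instrument segmentation). The network sizes, the PatchGAN-style discriminator, the loss weights, and the weight-transfer scheme are illustrative assumptions, not the authors' exact architecture or training protocol.

```python
# Sketch of GAN-based re-colorization pre-training followed by segmentation
# fine-tuning (PyTorch). Architectural details are assumptions for illustration.
import torch
import torch.nn as nn


class EncoderDecoder(nn.Module):
    """Small encoder-decoder backbone shared by both training stages."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class PatchDiscriminator(nn.Module):
    """Judges whether a (re-)colorized frame looks realistic."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)


# ---- Stage 1: self-supervised re-colorization on unlabeled video frames ----
generator = EncoderDecoder(in_ch=3, out_ch=3)       # decolorized input -> RGB
discriminator = PatchDiscriminator(in_ch=3)
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()


def pretrain_step(rgb_frame):
    """One adversarial re-colorization step on a raw endoscopic frame."""
    # Decolorize the frame to create a label-free input/target pair.
    gray = rgb_frame.mean(dim=1, keepdim=True).expand(-1, 3, -1, -1)
    fake_rgb = generator(gray)

    # Discriminator: real frames vs. re-colorized frames.
    d_opt.zero_grad()
    d_real = discriminator(rgb_frame)
    d_fake = discriminator(fake_rgb.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()

    # Generator: fool the discriminator while staying close to the true colors.
    g_opt.zero_grad()
    d_fake = discriminator(fake_rgb)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + 10.0 * l1(fake_rgb, rgb_frame)
    g_loss.backward()
    g_opt.step()


# ---- Stage 2: fine-tune on the labeled target task (instrument segmentation) ----
segmenter = EncoderDecoder(in_ch=3, out_ch=1)       # RGB frame -> binary instrument mask
# Transfer the representation learned during re-colorization (encoder weights only;
# the output heads of the two tasks differ).
segmenter.encoder.load_state_dict(generator.encoder.state_dict())
seg_opt = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
seg_loss = nn.BCEWithLogitsLoss()


def finetune_step(rgb_frame, mask):
    """One supervised step on a labeled (frame, instrument-mask) pair."""
    seg_opt.zero_grad()
    loss = seg_loss(segmenter(rgb_frame), mask)
    loss.backward()
    seg_opt.step()
```

In the paper's second variant, stage 1 would be followed by an additional supervised pre-training step on labeled segmentation data from a related domain before fine-tuning on the (smaller) labeled set from the target domain.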

Authors (15)
  1. Tobias Ross (4 papers)
  2. David Zimmerer (21 papers)
  3. Anant Vemuri (4 papers)
  4. Fabian Isensee (74 papers)
  5. Manuel Wiesenfarth (14 papers)
  6. Sebastian Bodenstedt (24 papers)
  7. Fabian Both (3 papers)
  8. Philip Kessler (1 paper)
  9. Martin Wagner (30 papers)
  10. Beat Müller (2 papers)
  11. Hannes Kenngott (13 papers)
  12. Stefanie Speidel (43 papers)
  13. Annette Kopp-Schneider (24 papers)
  14. Klaus Maier-Hein (59 papers)
  15. Lena Maier-Hein (82 papers)
Citations (128)
