Knowledge distillation from multi-modal to mono-modal segmentation networks (2106.09564v1)

Published 17 Jun 2021 in cs.CV, cs.AI, and stat.ML

Abstract: The joint use of multiple imaging modalities for medical image segmentation has been widely studied in recent years. Fusing information from different modalities has been demonstrated to improve segmentation accuracy, with respect to mono-modal segmentation, in several applications. However, acquiring multiple modalities is usually not possible in a clinical setting, owing to the limited availability of physicians and scanners and to constraints on cost and scan time. Most of the time, only one modality is acquired. In this paper, we propose KD-Net, a framework to transfer knowledge from a trained multi-modal network (teacher) to a mono-modal one (student). The proposed method is an adaptation of the generalized distillation framework, where the student network is trained on a subset (1 modality) of the teacher's inputs (n modalities). We illustrate the effectiveness of the proposed framework on brain tumor segmentation with the BraTS 2018 dataset. Using different architectures, we show that the student network effectively learns from the teacher and always outperforms the baseline mono-modal network in terms of segmentation accuracy.
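
For concreteness, below is a minimal PyTorch sketch of the kind of objective such a teacher-student setup implies: a supervised segmentation term on ground-truth labels plus a distillation term that pulls the student's per-voxel class distribution toward the teacher's. The function name `kd_loss`, the `alpha` weighting, the temperature, and the use of cross-entropy and KL divergence are illustrative assumptions in the spirit of generalized distillation, not the paper's exact losses (the paper also works with Dice-based objectives).

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, target, alpha=0.5, temperature=2.0):
    """Supervised segmentation loss plus a distillation term (sketch).

    student_logits, teacher_logits: (B, C, H, W[, D]) raw class scores.
    target: (B, H, W[, D]) integer ground-truth labels.
    alpha and temperature are illustrative hyperparameters.
    """
    # Supervised term: per-voxel cross-entropy against the ground truth.
    supervised = F.cross_entropy(student_logits, target)

    # Distillation term: KL divergence between temperature-softened
    # teacher and student class distributions at each voxel.
    t = temperature
    soft_student = F.log_softmax(student_logits / t, dim=1)
    soft_teacher = F.softmax(teacher_logits / t, dim=1)
    distill = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (t * t)

    return alpha * supervised + (1.0 - alpha) * distill
```

The key property of the framework is visible in how the two networks are fed: the frozen teacher receives all n modalities, while the student receives only one of them (here, a hypothetical first channel of `x_all`):

```python
teacher.eval()                       # trained multi-modal teacher, kept frozen
with torch.no_grad():
    t_logits = teacher(x_all)        # x_all: (B, n_modalities, H, W, D)
s_logits = student(x_all[:, :1])     # student sees a single modality
loss = kd_loss(s_logits, t_logits, labels)
loss.backward()
```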

Authors (7)
  1. Minhao Hu (5 papers)
  2. Matthis Maillard (2 papers)
  3. Ya Zhang (222 papers)
  4. Tommaso Ciceri (2 papers)
  5. Giammarco La Barbera (4 papers)
  6. Isabelle Bloch (45 papers)
  7. Pietro Gori (34 papers)
Citations (104)
