Prototype Knowledge Distillation for Medical Segmentation with Missing Modality (2303.09830v2)

Published 17 Mar 2023 in cs.CV

Abstract: Multi-modality medical imaging is crucial in clinical treatment, as it provides complementary information for medical image segmentation. However, collecting multi-modal data in clinical practice is difficult due to limits on scan time and other clinical constraints. It is therefore clinically meaningful to develop a segmentation paradigm that handles this missing-modality problem. In this paper, we propose a prototype knowledge distillation (ProtoKD) method to tackle this challenge, especially in the hardest scenario where only single-modal data is available. Specifically, ProtoKD not only distills the pixel-wise knowledge of multi-modality data to single-modality data but also transfers intra-class and inter-class feature variations, so that the student model learns more robust feature representations from the teacher model and can run inference on a single modality alone. Our method achieves state-of-the-art performance on the BraTS benchmark. The code is available at \url{https://github.com/SakurajimaMaiii/ProtoKD}.
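The abstract describes two distillation signals: a pixel-wise term that matches the student's class distributions to the teacher's, and a prototype-based term that transfers intra- and inter-class feature structure via pixel-to-prototype similarities. Below is a minimal PyTorch sketch of both ideas. It is an illustration under assumed shapes and names (`pixelwise_kd_loss`, `class_prototypes`, `prototype_similarity_loss` are hypothetical helpers, not the paper's exact formulation; consult the linked repository for the authors' implementation).

```python
import torch
import torch.nn.functional as F

def pixelwise_kd_loss(student_logits, teacher_logits, T=4.0):
    """Pixel-wise KD: KL divergence between temperature-softened
    class distributions at every spatial location."""
    s = F.log_softmax(student_logits / T, dim=1)
    t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(s, t, reduction="batchmean") * T * T

def class_prototypes(features, labels, num_classes):
    """Masked average pooling: one prototype vector per class.
    features: (B, C, H, W) feature maps; labels: (B, H, W) integer masks."""
    B, C, H, W = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, C)  # (B*H*W, C)
    labs = labels.reshape(-1)                            # (B*H*W,)
    protos = torch.zeros(num_classes, C, device=features.device)
    for k in range(num_classes):
        mask = labs == k
        if mask.any():
            protos[k] = feats[mask].mean(dim=0)
    return protos

def prototype_similarity_loss(student_feats, teacher_feats, labels,
                              num_classes, T=1.0):
    """Distil the pixel-to-prototype similarity distribution (which encodes
    intra-class compactness and inter-class separation) from teacher to student."""
    protos = class_prototypes(teacher_feats, labels, num_classes)
    protos = F.normalize(protos, dim=1)
    # Cosine similarity of every pixel to every class prototype: (B, K, H, W)
    sim_t = torch.einsum("bchw,kc->bkhw",
                         F.normalize(teacher_feats, dim=1), protos)
    sim_s = torch.einsum("bchw,kc->bkhw",
                         F.normalize(student_feats, dim=1), protos)
    return F.kl_div(F.log_softmax(sim_s / T, dim=1),
                    F.softmax(sim_t / T, dim=1),
                    reduction="batchmean") * T * T
```

In training, the multi-modal teacher is frozen and the single-modal student minimizes its segmentation loss plus a weighted sum of these two distillation terms, so that at test time the student segments from one modality alone.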

Authors (6)
  1. Shuai Wang (466 papers)
  2. Zipei Yan (7 papers)
  3. Daoan Zhang (24 papers)
  4. Haining Wei (2 papers)
  5. Zhongsen Li (6 papers)
  6. Rui Li (384 papers)
Citations (20)
