Meta-Learned Modality-Weighted Knowledge Distillation for Robust Multi-Modal Learning with Missing Data (2405.07155v2)

Published 12 May 2024 in cs.CV

Abstract: In multi-modal learning, some modalities are more influential than others, and their absence can have a significant impact on classification/segmentation accuracy. Addressing this challenge, we propose a novel approach called Meta-learned Modality-weighted Knowledge Distillation (MetaKD), which enables multi-modal models to maintain high accuracy even when key modalities are missing. MetaKD adaptively estimates the importance weight of each modality through a meta-learning process. These learned importance weights guide a pairwise modality-weighted knowledge distillation process, allowing high-importance modalities to transfer knowledge to lower-importance ones, resulting in robust performance despite missing inputs. Unlike previous methods in the field, which are often task-specific and require significant modifications, our approach is designed to work across multiple tasks (e.g., segmentation and classification) with minimal adaptation. Experimental results on five prevalent datasets, including three Brain Tumor Segmentation datasets (BraTS2018, BraTS2019 and BraTS2020), the Alzheimer's Disease Neuroimaging Initiative (ADNI) classification dataset and the Audiovision-MNIST classification dataset, demonstrate that the proposed model outperforms the compared models by a large margin.
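The pairwise modality-weighted distillation objective described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the softmax temperature, and the normalization of the meta-learned importance weights are all assumptions. Each modality acts as a teacher for every other modality, and each teacher-student KL term is scaled by the teacher's importance weight, so high-importance modalities contribute more strongly to the transfer.

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) per sample over the last axis."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def pairwise_weighted_kd_loss(logits, importance, temperature=2.0):
    """Sketch of a pairwise modality-weighted KD loss (assumed form).

    logits: dict mapping modality name -> (batch, classes) logit array.
    importance: dict mapping modality name -> meta-learned weight
                (here simply normalized to sum to 1, an assumption).
    Every modality i distills into every other modality j, with the
    KL term scaled by teacher i's importance weight.
    """
    total = sum(importance.values())
    w = {k: v / total for k, v in importance.items()}
    loss = 0.0
    for i in logits:
        teacher = softmax(logits[i], temperature)  # treated as fixed target
        for j in logits:
            if i == j:
                continue
            student = softmax(logits[j], temperature)
            loss += w[i] * kl_divergence(teacher, student).mean()
    return loss
```

With two hypothetical MRI modalities whose predictions disagree, the loss is positive; when all modalities already agree, it is zero, so the gradient (in a differentiable implementation) would push low-importance modalities toward the high-importance ones.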

Authors (13)
  1. Hu Wang (79 papers)
  2. Congbo Ma (23 papers)
  3. Yuyuan Liu (26 papers)
  4. Yuanhong Chen (30 papers)
  5. Yu Tian (249 papers)
  6. Jodie Avery (6 papers)
  7. Louise Hull (5 papers)
  8. Gustavo Carneiro (129 papers)
  9. Salma Hassan (3 papers)
  10. Yutong Xie (68 papers)
  11. Mostafa Salem (6 papers)
  12. Ian Reid (174 papers)
  13. Mohammad Yaqub (77 papers)
Citations (3)
