
Learning Robust Anymodal Segmentor with Unimodal and Cross-modal Distillation (2411.17141v1)

Published 26 Nov 2024 in cs.CV

Abstract: Simultaneously using multimodal inputs from multiple sensors to train segmentors is intuitively advantageous but practically challenging. A key challenge is unimodal bias, where multimodal segmentors over-rely on certain modalities, causing performance drops when others are missing, a common situation in real-world applications. To this end, we develop the first framework for learning a robust segmentor that can handle any combination of visual modalities. Specifically, we first introduce a parallel multimodal learning strategy to train a strong teacher. Cross-modal and unimodal distillation is then performed in the multi-scale representation space by transferring feature-level knowledge from the multimodal teacher to anymodal segmentors, addressing unimodal bias and avoiding over-reliance on specific modalities. Moreover, a prediction-level, modality-agnostic semantic distillation is proposed to transfer semantic knowledge for segmentation. Extensive experiments on both synthetic and real-world multi-sensor benchmarks demonstrate that our method achieves superior performance.
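The abstract describes two distillation signals: a feature-level loss in the multi-scale representation space and a prediction-level, modality-agnostic semantic loss. The sketch below is an illustrative NumPy approximation of that idea, not the authors' implementation; the MSE feature loss, the KL-based semantic loss, and all shapes and weights are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the class axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feature_distill_loss(teacher_feats, student_feats):
    """Feature-level distillation (assumed form): mean-squared error
    between teacher and student features, averaged over scales."""
    return float(np.mean([np.mean((t - s) ** 2)
                          for t, s in zip(teacher_feats, student_feats)]))

def semantic_distill_loss(teacher_logits, student_logits, eps=1e-8):
    """Prediction-level semantic distillation (assumed form): per-pixel
    KL divergence from the teacher's to the student's class distribution."""
    p = softmax(teacher_logits)  # teacher soft labels
    q = softmax(student_logits)  # student predictions
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

# Toy example: two feature scales of shape (H, W, C) and per-pixel
# logits over 5 classes; the student is a perturbed copy of the teacher.
rng = np.random.default_rng(0)
t_feats = [rng.normal(size=(8, 8, 16)), rng.normal(size=(4, 4, 32))]
s_feats = [f + 0.1 * rng.normal(size=f.shape) for f in t_feats]
t_logits = rng.normal(size=(8, 8, 5))
s_logits = t_logits + 0.1 * rng.normal(size=t_logits.shape)

total = feature_distill_loss(t_feats, s_feats) + semantic_distill_loss(t_logits, s_logits)
print(total)
```

In the paper's setting, the teacher consumes all modalities while the student sees an arbitrary subset, so minimizing such losses pushes the anymodal student toward the multimodal teacher's representations and predictions regardless of which sensors are present.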

Authors (9)
  1. Xu Zheng (88 papers)
  2. Haiwei Xue (6 papers)
  3. Jialei Chen (24 papers)
  4. Yibo Yan (39 papers)
  5. Lutao Jiang (13 papers)
  6. Yuanhuiyi Lyu (25 papers)
  7. Kailun Yang (136 papers)
  8. Linfeng Zhang (160 papers)
  9. Xuming Hu (120 papers)
Citations (1)
