DMCL: Distillation Multiple Choice Learning for Multimodal Action Recognition (1912.10982v1)

Published 23 Dec 2019 in cs.CV

Abstract: In this work, we address the problem of learning an ensemble of specialist networks using multimodal data, while considering the realistic and challenging scenario of possible missing modalities at test time. Our goal is to leverage the complementary information of multiple modalities to the benefit of the ensemble and each individual network. We introduce a novel Distillation Multiple Choice Learning framework for multimodal data, where different modality networks learn in a cooperative setting from scratch, strengthening one another. The modality networks learned using our method achieve significantly higher accuracy than if trained separately, due to the guidance of other modalities. We evaluate this approach on three video action recognition benchmark datasets. We obtain state-of-the-art results in comparison to other approaches that work with missing modalities at test time.
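The abstract's core mechanism can be sketched in a few lines: each modality network computes its own classification loss, the lowest-loss ("winner") network per sample supplies the hard-label gradient, and its softened predictions act as a distillation target for the other modality networks. The following is a minimal illustrative sketch, not the authors' implementation; the function name `dmcl_losses`, the temperature value, and the plain-NumPy formulation are all assumptions for exposition.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dmcl_losses(logits_per_modality, labels, temperature=2.0):
    """Hypothetical simplification of the DMCL objective.

    logits_per_modality: list of (N, C) logit arrays, one per modality network.
    Returns (hard_loss, distill_loss): cross-entropy of the per-sample winner,
    and the mean KL divergence pulling every network toward the winner's
    temperature-softened predictions.
    """
    n = len(labels)
    probs = [softmax(l) for l in logits_per_modality]
    # Per-modality, per-sample cross-entropy, shape (M, N).
    ce = np.stack([-np.log(p[np.arange(n), labels] + 1e-12) for p in probs])
    winner = ce.argmin(axis=0)                      # best modality per sample
    hard = ce[winner, np.arange(n)].mean()          # oracle (winner-only) loss
    # Softened distributions used as distillation targets.
    soft = [softmax(l / temperature) for l in logits_per_modality]
    kl = 0.0
    for m in range(len(logits_per_modality)):
        teacher = np.stack([soft[w][i] for i, w in enumerate(winner)])
        student = soft[m]
        kl += np.mean(np.sum(teacher * (np.log(teacher + 1e-12)
                                        - np.log(student + 1e-12)), axis=1))
    return hard, kl / len(logits_per_modality)
```

Because every network is also trained to match the winner's soft targets, a weaker modality still receives a useful learning signal on samples it loses, which is what lets each network operate alone when other modalities are missing at test time.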

Authors (6)
  1. Nuno C. Garcia (2 papers)
  2. Sarah Adel Bargal (29 papers)
  3. Vitaly Ablavsky (12 papers)
  4. Pietro Morerio (51 papers)
  5. Vittorio Murino (66 papers)
  6. Stan Sclaroff (56 papers)
Citations (43)
