Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos (2104.12671v3)

Published 26 Apr 2021 in cs.CV

Abstract: Multimodal self-supervised learning is drawing increasing attention because it not only allows training large networks without human supervision but also enables search and retrieval of data across modalities. In this context, this paper proposes a self-supervised training framework that learns a common multimodal embedding space that, in addition to sharing representations across different modalities, enforces a grouping of semantically similar instances. To this end, we extend the concept of instance-level contrastive learning with a multimodal clustering step in the training pipeline to capture semantic similarities across modalities. The resulting embedding space enables retrieval of samples across all modalities, even from unseen datasets and different domains. To evaluate our approach, we train our model on the HowTo100M dataset and evaluate its zero-shot retrieval capabilities in two challenging domains, namely text-to-video retrieval and temporal action localization, showing state-of-the-art results on four different datasets.
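To make the idea of combining instance-level contrastive learning with a multimodal clustering step concrete, the sketch below shows one plausible way such a training objective could look. It is not the authors' implementation: the modality names (`video_emb`, `text_emb`, `audio_emb`), the embedding dimension, the number of clusters, and the specific clustering/centroid-alignment term are all illustrative assumptions layered on a standard symmetric InfoNCE loss.

```python
# Minimal sketch (not the paper's code): pairwise contrastive losses across
# modalities plus a simple cluster-alignment term. All names, dimensions,
# and the clustering procedure are assumptions for illustration only.
import torch
import torch.nn.functional as F


def contrastive_loss(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two batches of embeddings (matched by index)."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                      # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)    # positives on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


def cluster_assignments(embeddings: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
    """Assign each embedding to its nearest centroid by cosine similarity."""
    sims = F.normalize(embeddings, dim=-1) @ F.normalize(centroids, dim=-1).t()
    return sims.argmax(dim=-1)


def multimodal_training_step(video_emb, text_emb, audio_emb, centroids, temperature=0.07):
    # 1) Instance-level contrastive terms across all modality pairs.
    loss = (
        contrastive_loss(video_emb, text_emb, temperature)
        + contrastive_loss(video_emb, audio_emb, temperature)
        + contrastive_loss(text_emb, audio_emb, temperature)
    )
    # 2) A simple clustering term: assign each sample a centroid from the joint
    #    (averaged) representation and pull every modality's embedding toward it,
    #    so semantically similar instances group together across modalities.
    joint = F.normalize((video_emb + text_emb + audio_emb) / 3.0, dim=-1)
    assign = cluster_assignments(joint, centroids)         # (B,)
    targets = F.normalize(centroids[assign], dim=-1)       # (B, D)
    for emb in (video_emb, text_emb, audio_emb):
        loss = loss + (1.0 - (F.normalize(emb, dim=-1) * targets).sum(dim=-1)).mean()
    return loss


if __name__ == "__main__":
    B, D, K = 8, 256, 16  # batch size, embedding dim, cluster count (made-up values)
    video = torch.randn(B, D, requires_grad=True)
    text = torch.randn(B, D, requires_grad=True)
    audio = torch.randn(B, D, requires_grad=True)
    centroids = torch.randn(K, D)
    loss = multimodal_training_step(video, text, audio, centroids)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

In practice the centroids would be refreshed periodically (e.g. by k-means over the joint embeddings of a large sample) rather than held fixed as in this toy step; how the paper handles that is not specified here.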

Authors (13)
  1. Brian Chen (21 papers)
  2. Andrew Rouditchenko (21 papers)
  3. Kevin Duarte (12 papers)
  4. Hilde Kuehne (69 papers)
  5. Samuel Thomas (42 papers)
  6. Angie Boggust (11 papers)
  7. Rameswar Panda (79 papers)
  8. Brian Kingsbury (54 papers)
  9. Rogerio Feris (105 papers)
  10. David Harwath (55 papers)
  11. James Glass (173 papers)
  12. Michael Picheny (32 papers)
  13. Shih-Fu Chang (131 papers)
Citations (83)
