
Self-Supervised Learning by Cross-Modal Audio-Video Clustering (1911.12667v3)

Published 28 Nov 2019 in cs.CV

Abstract: Visual and audio modalities are highly correlated, yet they contain different information. Their strong correlation makes it possible to predict the semantics of one from the other with good accuracy. Their intrinsic differences make cross-modal prediction a potentially more rewarding pretext task for self-supervised learning of video and audio representations compared to within-modality learning. Based on this intuition, we propose Cross-Modal Deep Clustering (XDC), a novel self-supervised method that leverages unsupervised clustering in one modality (e.g., audio) as a supervisory signal for the other modality (e.g., video). This cross-modal supervision helps XDC utilize the semantic correlation and the differences between the two modalities. Our experiments show that XDC outperforms single-modality clustering and other multi-modal variants. XDC achieves state-of-the-art accuracy among self-supervised methods on multiple video and audio benchmarks. Most importantly, our video model pretrained on large-scale unlabeled data significantly outperforms the same model pretrained with full-supervision on ImageNet and Kinetics for action recognition on HMDB51 and UCF101. To the best of our knowledge, XDC is the first self-supervised learning method that outperforms large-scale fully-supervised pretraining for action recognition on the same architecture.

Cross-Modal Audio-Video Clustering for Self-Supervised Learning

The paper "Self-Supervised Learning by Cross-Modal Audio-Video Clustering" introduces an innovative approach to self-supervised learning by leveraging the semantic correlations and intrinsic differences between audio and video modalities. This method, entitled Cross-Modal Deep Clustering (XDC), presents a compelling framework for training video models without relying on labeled datasets, which addresses significant challenges in the scalability and label-space definition of action recognition.

Framework Overview

XDC is built on the premise that the visual and audio channels, while correlated, carry distinct information. The method uses unsupervised clustering on one modality as the supervisory signal for the other: cluster assignments computed from audio features serve as pseudo-labels for training the video encoder, and vice versa. This cross-modal prediction setup yields richer learned representations than traditional within-modality self-supervision.
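
As a rough illustration of this idea, the sketch below shows a single cross-modal pseudo-labeling step in the spirit of XDC: audio features are clustered with k-means, the resulting assignments become classification targets for the video encoder, and the roles are then swapped. The placeholder encoders, feature sizes, optimizer settings, and the scikit-learn k-means call are assumptions for illustration, not the paper's exact implementation.

```python
# Minimal sketch of one cross-modal pseudo-labeling step in the spirit of XDC.
# The encoders here are tiny placeholders (the paper uses deep video and audio
# networks); sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

NUM_CLUSTERS = 256  # number of pseudo-classes k (a tunable hyperparameter)

video_encoder = nn.Sequential(nn.Flatten(), nn.Linear(4096, 512))   # placeholder video net
audio_encoder = nn.Sequential(nn.Flatten(), nn.Linear(1600, 512))   # placeholder audio net
video_head = nn.Linear(512, NUM_CLUSTERS)  # classifies clips into audio-derived clusters

optimizer = torch.optim.SGD(
    list(video_encoder.parameters()) + list(video_head.parameters()), lr=0.01, momentum=0.9
)

def audio_to_video_step(video_clips, audio_spectrograms):
    """Cluster audio features, then use the assignments as labels for the video branch."""
    with torch.no_grad():                                  # no gradient flows to the audio net
        audio_feats = audio_encoder(audio_spectrograms)    # (N, 512)
    pseudo_labels = KMeans(n_clusters=NUM_CLUSTERS, n_init=10).fit_predict(
        audio_feats.cpu().numpy()
    )
    pseudo_labels = torch.as_tensor(pseudo_labels, dtype=torch.long)

    logits = video_head(video_encoder(video_clips))        # (N, NUM_CLUSTERS)
    loss = nn.functional.cross_entropy(logits, pseudo_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
    # The symmetric step, in which video clusters supervise the audio encoder, mirrors this.
```

In the full method, clustering would be run over features of the entire pretraining set rather than a single batch, with clustering and encoder training alternating before the video model is fine-tuned on downstream benchmarks.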

Three primary multi-modal deep clustering approaches are outlined in the paper (a toy sketch contrasting their pseudo-label sources follows the list):

  1. Multi-Head Deep Clustering (MDC): Extends single-modality DeepCluster to a multi-modal context by adding a second classification head for cross-modal supervision.
  2. Concatenation Deep Clustering (CDC): Forms joint features by concatenating normalized audio-visual features before clustering.
  3. Cross-Modal Deep Clustering (XDC): Uses the clusters from one modality as the exclusive supervisory signal for training the encoder of the other modality; this variant emerged as the most effective of the three.
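
To make the distinction between these variants concrete, the toy snippet below contrasts where each one's pseudo-labels come from, using random placeholder embeddings; the shapes, cluster count, and scikit-learn k-means call are illustrative assumptions rather than the paper's implementation.

```python
# Toy contrast of where each variant's pseudo-labels come from.
# The embeddings below are random placeholders standing in for per-clip features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
video_feats = normalize(rng.standard_normal((1000, 512)))  # placeholder video embeddings
audio_feats = normalize(rng.standard_normal((1000, 512)))  # placeholder audio embeddings
k = 64  # illustrative cluster count

def cluster(features):
    return KMeans(n_clusters=k, n_init=10).fit_predict(features)

video_labels = cluster(video_feats)
audio_labels = cluster(audio_feats)

# MDC: each encoder trains with two heads, one per label set (within- and cross-modal).
mdc_video_targets = (video_labels, audio_labels)

# CDC: cluster the concatenated normalized features; both encoders share these labels.
cdc_targets = cluster(np.concatenate([video_feats, audio_feats], axis=1))

# XDC: each encoder is supervised exclusively by the other modality's clusters.
xdc_video_targets = audio_labels
xdc_audio_targets = video_labels
```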

Results and Comparisons

Empirical evaluations demonstrate that XDC significantly outperforms not only baseline models trained from scratch but also fully-supervised pretraining on large datasets such as ImageNet and Kinetics for action recognition on the HMDB51 and UCF101 benchmarks. Key results include:

  • UCF101 Accuracy: XDC achieved 95.5% accuracy when pretrained on IG-Kinetics, indicating its efficacy over other self-supervised methods and Kinetics-supervised baselines.
  • HMDB51 Accuracy: The approach delivered 68.9% accuracy, again surpassing supervised methods.

Implications and Future Work

XDC's success in outperforming fully-supervised pretraining suggests a paradigm shift in representation learning for video data. The method's ability to learn from uncurated video datasets such as IG-Random underscores its potential to scale across domains without requiring extensive labeled data.

In terms of real-world implications, deploying XDC-like mechanisms could democratize access to powerful action recognition models by reducing the need for expensive, labor-intensive dataset construction. The theoretical implications extend to a re-evaluation of how modalities interact within machine learning frameworks, encouraging further exploration of complementarities between data modalities beyond video and audio.

Future work could extend XDC to additional modalities, such as text or sensor data, broadening the reach of this self-supervision principle. Exploring adaptive clustering strategies or tighter integration with temporal learning dynamics could further refine the representational capacity of cross-modal models.

Conclusion

The XDC approach sets a new benchmark in self-supervised learning, successfully leveraging cross-modal cues for video and audio representation learning. By demonstrating that such methods can match and even surpass traditional supervised pretraining, this work paves the way for more efficient and scalable model training and establishes cross-modal clustering as a key strategy for future advances in machine learning.

Authors (6)
  1. Humam Alwassel (9 papers)
  2. Dhruv Mahajan (38 papers)
  3. Bruno Korbar (9 papers)
  4. Lorenzo Torresani (73 papers)
  5. Bernard Ghanem (256 papers)
  6. Du Tran (28 papers)
Citations (413)