FILS: Self-Supervised Video Feature Prediction In Semantic Language Space (2406.03447v1)

Published 5 Jun 2024 in cs.CV, cs.AI, and cs.LG

Abstract: This paper demonstrates a self-supervised approach for learning semantic video representations. Recent vision studies show that a masking strategy for vision and natural language supervision has contributed to developing transferable visual pretraining. Our goal is to achieve a more semantic video representation by leveraging the text related to the video content during the pretraining in a fully self-supervised manner. To this end, we present FILS, a novel self-supervised video Feature prediction In semantic Language Space (FILS). The vision model can capture valuable structured information by correctly predicting masked feature semantics in language space. It is learned using a patch-wise video-text contrastive strategy, in which the text representations act as prototypes for transforming vision features into a language space, which are then used as targets for semantically meaningful feature prediction using our masked encoder-decoder structure. FILS demonstrates remarkable transferability on downstream action recognition tasks, achieving state-of-the-art on challenging egocentric datasets, like Epic-Kitchens, Something-SomethingV2, Charades-Ego, and EGTEA, using ViT-Base. Our efficient method requires less computation and smaller batches compared to previous works.

FILS: Self-Supervised Video Feature Prediction In Semantic Language Space

This paper introduces FILS, a novel self-supervised learning framework for video representation that operates in semantic language space. FILS extends the paradigm of contrastive language-vision pretraining, notably used in CLIP, to the domain of videos by incorporating a masked feature prediction methodology within a language-semantic context. This work leverages natural language descriptions as a supervisory signal, not only embedding video features into a language-aligned space but also guiding the masked feature reconstruction process towards more semantically meaningful outcomes.
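
To make the mechanism concrete, below is a minimal PyTorch sketch (not the authors' released code) of one plausible way to supervise masked patch prediction in a language space: predicted and target patch features are both expressed as soft assignments over text-token embeddings, which act as language prototypes, and the loss is computed only on masked positions. The tensor names, the softmax/cross-entropy formulation, and the temperature value are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn.functional as F

def masked_prediction_in_language_space(decoder_out, target_feats, text_protos,
                                        mask, tau=0.07):
    """Hypothetical FILS-style objective (a sketch, not the official code).

    decoder_out:  (B, N, D) decoder predictions for all patch positions
    target_feats: (B, N, D) target visual features for the same patches
    text_protos:  (K, D)    text-token embeddings used as language prototypes
    mask:         (B, N)    True where the patch was masked for the encoder
    """
    protos = F.normalize(text_protos, dim=-1)

    # Transform predictions and targets into "language space": each patch is
    # represented by a soft assignment over the text prototypes.
    pred_lang = F.softmax(F.normalize(decoder_out, dim=-1) @ protos.T / tau, dim=-1)
    target_lang = F.softmax(F.normalize(target_feats, dim=-1) @ protos.T / tau, dim=-1)

    # Cross-entropy between the two assignments, restricted to masked patches.
    per_patch = -(target_lang * pred_lang.clamp_min(1e-8).log()).sum(-1)  # (B, N)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)
```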

Key Contributions

  1. Self-Supervised Video Pretraining: FILS employs a self-supervised approach, focusing on video understanding by predicting masked video features in a semantic language space. This prediction is guided by natural language descriptions, providing an additional semantic layer missing in typical video processing pipelines that rely on pure pixel-level reconstruction.
  2. Contrastive Learning on Action Patches: The model enhances video-text alignment by performing contrastive learning between video patches within identified action regions and their corresponding natural language descriptions. This is operationalized through a novel component named ActCLIP, which concentrates the contrastive objective on the spatial regions of video frames where significant actions occur (a schematic sketch follows this list).
  3. Efficient Training: FILS's masking and contrastive design reduces computational overhead, allowing scalable and efficient training. The model reaches state-of-the-art performance with lower memory requirements and smaller batch sizes than its predecessors, demonstrating both effectiveness and efficiency.
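
As a companion to the ActCLIP contribution above, the following hypothetical sketch shows a patch-wise video-text contrastive objective: per-patch similarities to every caption in the batch are averaged over an action-region mask, and a standard symmetric InfoNCE loss is applied to the resulting clip-caption logits. The `action_mask` input, the aggregation scheme, and the temperature are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def actclip_style_contrastive(patch_feats, text_feats, action_mask, tau=0.07):
    """Illustrative patch-wise video-text contrastive loss (assumed interface).

    patch_feats: (B, N, D) per-patch video features (projected to the text dim)
    text_feats:  (B, D)    caption embeddings, one per clip
    action_mask: (B, N)    1 where a patch overlaps the estimated action region
    """
    v = F.normalize(patch_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)

    # Patch-wise similarity of every patch to every caption in the batch,
    # averaged over each clip's action-region patches.
    sim = torch.einsum('bnd,kd->bnk', v, t)                      # (B, N, B)
    w = action_mask.unsqueeze(-1).float()
    logits = (sim * w).sum(1) / w.sum(1).clamp(min=1e-6) / tau   # (B, B)

    # Symmetric InfoNCE: matching clip/caption pairs are positives,
    # all other pairings in the batch are negatives.
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))
```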

Empirical Evaluation

FILS shows strong empirical results across several challenging video action recognition benchmarks, including Epic-Kitchens, Something-Something V2, Charades-Ego, and EGTEA. Notably, FILS attains state-of-the-art performance in action recognition tasks while being pretrained on comparatively smaller datasets. The qualitative examples provided illustrate the model's ability to focus attention on meaningful semantic regions of videos.

Implications and Future Directions

The seamless integration of visual and textual modalities enabled by FILS suggests several potential applications beyond action recognition, including video captioning and visual question answering within video contexts. Further exploration is merited in optimizing the semantic richness and efficiency of video representations. Possible future extensions could involve scaling the architecture with larger pretraining datasets and advanced transformer architectures to enhance generalization and fine-grained semantic understanding.

In conclusion, FILS represents a significant step forward in self-supervised video representation learning by combining masked reconstruction with semantic language guidance, offering a viable pathway for more intelligent video understanding systems.

Authors (3)
  1. Mona Ahmadian (3 papers)
  2. Frank Guerin (30 papers)
  3. Andrew Gilbert (44 papers)