
SlowFast Networks for Video Recognition (1812.03982v3)

Published 10 Dec 2018 in cs.CV

Abstract: We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report state-of-the-art accuracy on major video recognition benchmarks, Kinetics, Charades and AVA. Code has been made available at: https://github.com/facebookresearch/SlowFast


Introduction

The paper introduces the SlowFast network architecture for video recognition. This model comprises two distinct pathways: a Slow pathway operating at a low frame rate to capture spatial semantics, and a Fast pathway operating at a high frame rate to capture fine-grained motion information. This duality leverages the complementary strengths of high temporal and spatial resolution, achieving state-of-the-art performance on major video recognition benchmarks.

Architecture Design

The Slow pathway, designed to process fewer frames, captures detailed spatial information. Meanwhile, the Fast pathway processes more frames, focusing on dynamic motion while being computationally lightweight due to reduced channel size. This division allows the Fast pathway to enhance temporal fidelity without excessive computational burden.

Key Components

  1. Slow Pathway: Processes one frame out of every τ frames (e.g., τ = 16), thus focusing on spatial information over time.
  2. Fast Pathway: Samples at a higher frame rate (e.g., every 2 frames) but with fewer channels, concentrating on transient motion dynamics.
  3. Lateral Connections: These connections fuse information from the Fast to the Slow pathway, ensuring that high-temporal-resolution features enhance the spatially rich features of the Slow pathway.
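The sampling scheme above can be sketched in a few lines. This is a minimal illustration using the paper's default hyperparameters (τ = 16, a frame-rate ratio α = 8, and a channel ratio β = 1/8); the helper name `sample_indices` is my own, not from the released code.

```python
# Sketch of SlowFast frame sampling (defaults from the paper: tau=16,
# alpha=8, beta=1/8; function and variable names are illustrative).

def sample_indices(num_frames, tau, alpha):
    """Return the frame indices seen by each pathway for one clip."""
    slow = list(range(0, num_frames, tau))           # 1 frame every tau
    fast = list(range(0, num_frames, tau // alpha))  # alpha x denser in time
    return slow, fast

slow_idx, fast_idx = sample_indices(num_frames=64, tau=16, alpha=8)
print(len(slow_idx), len(fast_idx))  # 4 slow frames vs. 32 fast frames

# The Fast pathway stays lightweight because its channel width is
# scaled down by beta relative to the Slow pathway:
slow_channels = 256
beta = 1 / 8
fast_channels = int(slow_channels * beta)  # 32
```

The α-times-denser temporal sampling is what gives the Fast pathway its fine temporal resolution, while the β channel scaling keeps its compute a small fraction of the total.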

Methodology

Training and Inference

Training from scratch distinguishes this work from many existing models that rely on ImageNet pre-training. The authors adopt synchronized SGD with a half-period cosine learning-rate decay schedule. For inference, the model uniformly samples 10 clips from a video and performs spatially center-cropped evaluation to keep computation manageable.
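The half-period cosine decay can be written as a one-line schedule. The formula below follows the standard half-period cosine form; the base learning rate and iteration count are illustrative values, not the paper's exact settings.

```python
import math

# Half-period cosine learning-rate decay: starts at base_lr, ends near 0.
# base_lr and max_iter here are illustrative, not the paper's settings.
def cosine_lr(base_lr, it, max_iter):
    return base_lr * 0.5 * (math.cos(math.pi * it / max_iter) + 1.0)

print(cosine_lr(0.1, 0, 100))    # 0.1 at the start
print(cosine_lr(0.1, 50, 100))   # ~0.05 at the midpoint
print(cosine_lr(0.1, 100, 100))  # ~0.0 at the end
```

Unlike step decay, the cosine schedule lowers the rate smoothly, which pairs well with long from-scratch training runs.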

Datasets

Evaluations were conducted on Kinetics-400, Kinetics-600, Charades, and AVA datasets. These datasets provide a comprehensive benchmark suite, covering short-term actions and long-term activities, as well as spatially localized atomic actions.

Performance and Results

Kinetics Datasets

The SlowFast models set new records on the Kinetics-400 and Kinetics-600 benchmarks. On Kinetics-400, the top-performing SlowFast variant achieved 79.8% top-1 accuracy, surpassing prior methods despite using no ImageNet pre-training. On Kinetics-600, the model maintained its competitive edge with 81.8% top-1 accuracy.

Charades

SlowFast models also demonstrated strong performance on the Charades dataset, with the best model attaining 45.2 mAP when pre-trained on Kinetics-600. This is notable given the multi-label nature of Charades, reflecting the model’s robustness in capturing long-range temporal dependencies.

AVA Action Detection

On the AVA dataset, which focuses on spatiotemporal action detection, the SlowFast model achieved a notable mAP improvement by 5.2 points over the baseline Slow model, emphasizing its strength in dynamic scene understanding. The best-performing model achieved 28.2 mAP on AVA v2.1, setting a new standard for action detection accuracy.

Ablation Studies

The paper includes comprehensive ablation studies to validate the design choices of the SlowFast architecture. Key findings include:

  • Channel Capacity: The reduced channel capacity of the Fast pathway is crucial for maintaining computational efficiency without sacrificing accuracy.
  • Lateral Connections: Several lateral-fusion methods (such as time-to-channel reshaping and time-strided convolution) were evaluated, with the time-strided convolution (T-conv) proving most effective.
  • Weaker Spatial Inputs: Variants feeding the Fast pathway grayscale, optical-flow, or time-difference frames were also tested, showing competitive results and validating the robustness of the SlowFast design.
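The lateral-fusion options above differ mainly in how they reshape the Fast pathway's αT frames to match the Slow pathway's T frames. This is a shape-level sketch (array contents are random; only the shapes matter), and the actual T-conv variant uses a learned strided convolution rather than the plain striding shown here.

```python
import numpy as np

# Shape-level sketch of two lateral-fusion options from the ablations.
alpha, beta = 8, 1 / 8
T, S, C = 4, 7, 256                       # Slow features: T frames, SxS, C channels
fast = np.random.rand(alpha * T, S, S, int(beta * C))  # Fast: {alpha*T, SxS, beta*C}

# Time-to-channel: pack alpha consecutive frames into the channel axis,
# yielding {T, SxS, alpha*beta*C}, which can concatenate with Slow features.
t2c = fast.reshape(T, alpha, S, S, int(beta * C))
t2c = np.moveaxis(t2c, 1, 3).reshape(T, S, S, int(alpha * beta * C))
print(t2c.shape)  # (4, 7, 7, 256)

# Time-strided variant: subsample in time with stride alpha (the paper's
# T-conv learns this with a strided temporal convolution instead).
strided = fast[::alpha]
print(strided.shape)  # (4, 7, 7, 32)
```

Either way, fusion is one-directional in the default design: Fast features flow into the Slow pathway, enriching its spatially detailed representation with motion cues.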

Implications and Future Directions

Theoretical Implications

The dichotomy of slow and fast pathways in the SlowFast architecture aligns with principles observed in biological vision systems, specifically the Parvo- and Magnocellular pathways. This suggests promising avenues for future work exploring biologically inspired architectures in video recognition.

Practical Implications

Practically, the SlowFast architecture’s modest computational requirements relative to its performance make it a compelling choice for real-world applications, such as video surveillance, autonomous driving, and human-computer interaction, where both accuracy and efficiency are critical.

Future Developments

Future research can explore:

  1. Pathway Enhancement: Further enhancements to each pathway (e.g., dedicated modules for different motion types).
  2. Extended Fusion Techniques: Novel fusion techniques that dynamically adjust the contributions of each pathway based on the content.
  3. Cross-Modal Learning: Integrating audio and text modalities to improve context understanding in videos.

Conclusion

The SlowFast networks presented in this paper represent a significant advancement in video recognition, achieving state-of-the-art performance while maintaining computational efficiency. The architecture’s ability to balance spatial and temporal information offers a robust framework for future research and application in diverse video analysis domains.

Authors (4)
  1. Christoph Feichtenhofer (52 papers)
  2. Haoqi Fan (33 papers)
  3. Jitendra Malik (210 papers)
  4. Kaiming He (71 papers)
Citations (2,965)