Video Swin Transformer (2106.13230v1)

Published 24 Jun 2021 in cs.CV, cs.AI, and cs.LG

Abstract: The vision community is witnessing a modeling shift from CNNs to Transformers, where pure Transformer architectures have attained top accuracy on the major video recognition benchmarks. These video models are all built on Transformer layers that globally connect patches across the spatial and temporal dimensions. In this paper, we instead advocate an inductive bias of locality in video Transformers, which leads to a better speed-accuracy trade-off compared to previous approaches which compute self-attention globally even with spatial-temporal factorization. The locality of the proposed video architecture is realized by adapting the Swin Transformer designed for the image domain, while continuing to leverage the power of pre-trained image models. Our approach achieves state-of-the-art accuracy on a broad range of video recognition benchmarks, including on action recognition (84.9 top-1 accuracy on Kinetics-400 and 86.1 top-1 accuracy on Kinetics-600 with ~20x less pre-training data and ~3x smaller model size) and temporal modeling (69.6 top-1 accuracy on Something-Something v2). The code and models will be made publicly available at https://github.com/SwinTransformer/Video-Swin-Transformer.

An Analysis of "Video Swin Transformer"

The paper "Video Swin Transformer" presents an innovative approach to video recognition by adapting the Swin Transformer, originally designed for image recognition, to handle video data. The authors propose a novel architecture that introduces an inductive bias of locality in video Transformers to balance speed and accuracy efficiently.

Introduction and Motivation

The landscape of visual modeling has seen a significant shift from Convolutional Neural Networks (CNNs) to Transformer-based architectures. Pioneering models like Vision Transformer (ViT) demonstrated that Transformer architectures could outperform CNNs on image recognition tasks by globally modeling spatial relationships. This paper builds on this premise but recognizes that extending such global self-attention mechanisms naively to videos incurs prohibitive computation costs. Therefore, the authors advocate for an inductive bias of locality to efficiently scale Transformers to video tasks.

Architecture

The proposed Video Swin Transformer adapts the Swin Transformer for videos by leveraging the inherent spatiotemporal locality within video frames. The central idea is that pixels close in spatiotemporal distance have higher correlation, allowing for efficient local self-attention computations.
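To make the speed-accuracy argument concrete, the back-of-the-envelope sketch below compares the approximate FLOP count of global self-attention against 3D-windowed self-attention, using the standard Swin-style complexity estimate (a 4NC² projection term plus an attention term that is quadratic in N globally but bounded by the window volume locally). The token grid, window size, and channel width are the defaults the paper reports for the Swin-T configuration; the function names and the script itself are illustrative, not taken from the released code.

```python
def global_msa_flops(num_tokens: int, dim: int) -> float:
    # 4*N*C^2 for the qkv/output projections plus 2*N^2*C for the attention itself
    return 4 * num_tokens * dim ** 2 + 2 * num_tokens ** 2 * dim


def windowed_msa_flops(num_tokens: int, dim: int, window_volume: int) -> float:
    # Same projection cost, but each token only attends within its 3D window
    return 4 * num_tokens * dim ** 2 + 2 * window_volume * num_tokens * dim


# Defaults from the paper: a 32x224x224 clip with 2x4x4 patches yields a 16x56x56
# token grid, the attention window is 8x7x7, and C = 96 for the Swin-T variant.
T, H, W, C = 16, 56, 56, 96
N = T * H * W
ratio = global_msa_flops(N, C) / windowed_msa_flops(N, C, window_volume=8 * 7 * 7)
print(f"global attention costs roughly {ratio:.0f}x the FLOPs of 3D-windowed attention")
```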

Key Architectural Components

  1. 3D Patch Partitioning: The video input is partitioned into non-overlapping 3D patches, which are then embedded into a higher-dimensional space.
  2. Hierarchical Structure: Following the original Swin Transformer, the video model employs a hierarchical architecture with 2× spatial downsampling in the patch merging layer of each stage; the temporal dimension is not downsampled.
  3. 3D Shifted Window Based Multi-Head Self-Attention (MSA): This mechanism introduces locality by computing self-attention within non-overlapping 3D windows. To create connections across windows, the window configuration is shifted between consecutive layers, akin to the Swin Transformer's approach for images but extended to the spatiotemporal domain (a sketch follows this list).
  4. Relative Position Bias: A 3D relative position bias is incorporated into the self-attention mechanism to account for spatial and temporal relationships more effectively.
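
As a rough illustration of how the 3D window partition and the cyclic shift of item 3 fit together, below is a minimal PyTorch-style sketch. The function names (`window_partition_3d`, `window_reverse_3d`, `shifted_window_attention_3d`) and the generic `attn` module are assumptions for this sketch, divisibility by the window size is assumed (real code pads first), and the attention mask for tokens that wrap around the volume as well as the 3D relative position bias of item 4 are omitted; the released repository is the authoritative reference for those details.

```python
import torch


def window_partition_3d(x: torch.Tensor, window_size: tuple) -> torch.Tensor:
    """Split a (B, D, H, W, C) token volume into non-overlapping 3D windows,
    returning (num_windows * B, Wd*Wh*Ww, C). Assumes D, H, W divide evenly."""
    B, D, H, W, C = x.shape
    wd, wh, ww = window_size
    x = x.view(B, D // wd, wd, H // wh, wh, W // ww, ww, C)
    return x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(-1, wd * wh * ww, C)


def window_reverse_3d(windows: torch.Tensor, window_size: tuple,
                      B: int, D: int, H: int, W: int) -> torch.Tensor:
    """Inverse of window_partition_3d: stitch windows back into (B, D, H, W, C)."""
    wd, wh, ww = window_size
    x = windows.reshape(B, D // wd, H // wh, W // ww, wd, wh, ww, -1)
    return x.permute(0, 1, 4, 2, 5, 3, 6, 7).reshape(B, D, H, W, -1)


def shifted_window_attention_3d(x, attn, window_size, shift_size):
    """Window-local attention; on alternating blocks the volume is cyclically shifted
    so the new windows straddle the previous block's window boundaries. `attn` is any
    module mapping (num_windows*B, tokens, C) to the same shape; in the real model it
    also adds a learned 3D relative position bias to the attention logits."""
    B, D, H, W, C = x.shape
    if any(s > 0 for s in shift_size):
        x = torch.roll(x, shifts=[-s for s in shift_size], dims=(1, 2, 3))
    windows = attn(window_partition_3d(x, window_size))
    x = window_reverse_3d(windows, window_size, B, D, H, W)
    if any(s > 0 for s in shift_size):
        x = torch.roll(x, shifts=list(shift_size), dims=(1, 2, 3))
    return x
```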

Variants and Initialization

The authors explore several variants of the architecture, designated Swin-T, Swin-S, Swin-B, and Swin-L, which vary in model size and computational complexity. The models benefit from strong initialization by leveraging weights pre-trained on large-scale image datasets such as ImageNet-21K; a sketch of how a 2D pre-trained kernel can be adapted to the 3D patch embedding follows.
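
As a hedged illustration of the two initialization strategies the authors ablate for adapting a 2D pre-trained patch-embedding kernel to the 3D model ("inflate" versus "center" initialization, revisited under Future Directions below), the sketch below shows one plausible implementation. It assumes PyTorch, the function names are illustrative, and the exact rescaling and slice placement in the released code may differ.

```python
import torch


def inflate_patch_embed(w2d: torch.Tensor, temporal: int) -> torch.Tensor:
    """'Inflate' a 2D patch-embedding kernel of shape (C_out, C_in, kH, kW) to 3D by
    replicating it along a new temporal axis and rescaling (here by 1/temporal) so a
    temporally constant input produces the same output as the 2D layer."""
    w3d = w2d.unsqueeze(2).repeat(1, 1, temporal, 1, 1)  # (C_out, C_in, kT, kH, kW)
    return w3d / temporal


def center_init_patch_embed(w2d: torch.Tensor, temporal: int) -> torch.Tensor:
    """'Center' initialization: a single temporal slice receives the full 2D weights
    and the remaining slices start at zero."""
    c_out, c_in, kh, kw = w2d.shape
    w3d = torch.zeros(c_out, c_in, temporal, kh, kw, dtype=w2d.dtype)
    w3d[:, :, temporal // 2] = w2d
    return w3d


# Hypothetical example: adapt a 4x4 image patch kernel to a 2x4x4 video patch kernel.
w2d = torch.randn(96, 3, 4, 4)  # stand-in for an ImageNet-pretrained Swin embedding
print(inflate_patch_embed(w2d, temporal=2).shape)      # torch.Size([96, 3, 2, 4, 4])
print(center_init_patch_embed(w2d, temporal=2).shape)  # torch.Size([96, 3, 2, 4, 4])
```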

Empirical Results

The proposed Video Swin Transformer achieves state-of-the-art performance on benchmark video recognition datasets, including Kinetics-400 (K400), Kinetics-600 (K600), and Something-Something v2 (SSv2).

Key Results:

  • Kinetics-400: Achieves 84.9% top-1 accuracy, surpassing previous state-of-the-art models such as ViViT-H while using roughly 20× less pre-training data and a roughly 3× smaller model.
  • Kinetics-600: Achieves 86.1% top-1 accuracy, again improving on prior state-of-the-art results.
  • Something-Something v2: Demonstrates strong temporal modeling capabilities with a top-1 accuracy of 69.6%.

Implications and Future Directions

The Video Swin Transformer demonstrates that incorporating locality in transformer architectures is beneficial for video tasks, leading to improvements in computational efficiency and model performance. The findings suggest several directions for future research:

  1. Scalability: Further investigation into scaling the temporal dimension for longer video sequences while maintaining computational efficiency.
  2. Initialization: Exploring advanced strategies for utilizing pre-trained image model weights, particularly focusing on the differences between inflate and center initialization methods.
  3. Temporal Dynamics: Enhanced modeling of complex temporal dynamics, possibly incorporating a more nuanced handling of temporal attention mechanisms.

Conclusion

The proposed Video Swin Transformer marks a significant advancement in video recognition. By capitalizing on spatiotemporal locality, the model achieves a superior speed-accuracy trade-off, paving the way for more efficient and effective video Transformer models. The public release of the code and models further positions this approach as a foundation for future research and development in video AI.

Authors (7)
  1. Ze Liu
  2. Jia Ning
  3. Yue Cao
  4. Yixuan Wei
  5. Zheng Zhang
  6. Stephen Lin
  7. Han Hu
Citations (1,293)