SPARTAN: Self-supervised Spatiotemporal Transformers Approach to Group Activity Recognition (2303.12149v4)

Published 6 Mar 2023 in cs.CV

Abstract: In this paper, we propose a new, simple, and effective Self-supervised Spatio-temporal Transformers (SPARTAN) approach to Group Activity Recognition (GAR) using unlabeled video data. Given a video, we create local and global Spatio-temporal views with varying spatial patch sizes and frame rates. The proposed self-supervised objective aims to match the features of these contrasting views representing the same video to be consistent with the variations in spatiotemporal domains. To the best of our knowledge, the proposed mechanism is one of the first works to alleviate the weakly supervised setting of GAR using the encoders in video transformers. Furthermore, using the advantage of transformer models, our proposed approach supports long-term relationship modeling along spatio-temporal dimensions. The proposed SPARTAN approach performs well on two group activity recognition benchmarks, including NBA and Volleyball datasets, by surpassing the state-of-the-art results by a significant margin in terms of MCA and MPCA metrics.
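
The abstract's key mechanism is a multi-view consistency objective: local and global spatiotemporal views of the same video, sampled at different spatial patch sizes and frame rates, are encouraged to produce matching features. Below is a minimal sketch of that idea, not the paper's implementation; the names (`sample_view`, `spartan_loss`, `student`, `teacher`), the specific patch sizes and frame rates, and the cosine-similarity form of the loss are all illustrative assumptions.

```python
# Hedged sketch of a multi-view self-supervised consistency objective,
# roughly in the spirit described in the abstract. Names and hyperparameters
# are hypothetical, not taken from the paper.
import torch
import torch.nn.functional as F


def sample_view(video, frame_rate, patch_size):
    """Subsample frames at `frame_rate` and tag the view with a patch size.

    `video` is a tensor of shape (T, C, H, W). A fuller implementation would
    also crop spatially for local views; here we only subsample in time.
    """
    frames = video[::frame_rate]
    return frames, patch_size


def spartan_loss(student, teacher, video):
    """Match features of contrasting views of the same video.

    `student` and `teacher` are callables mapping (frames, patch_size) to a
    feature vector; the teacher is typically a momentum copy of the student
    and is not updated through this loss.
    """
    # Global views: full spatial extent, coarse and fine frame rates.
    global_views = [sample_view(video, fr, patch_size=16) for fr in (4, 2)]
    # Local views: smaller patches, varying frame rates.
    local_views = [sample_view(video, fr, patch_size=8) for fr in (8, 4, 2)]

    # Teacher targets come only from the global views.
    with torch.no_grad():
        targets = [F.normalize(teacher(*v), dim=-1) for v in global_views]

    loss = 0.0
    count = 0
    for view in global_views + local_views:
        pred = F.normalize(student(*view), dim=-1)
        for tgt in targets:
            # Negative cosine similarity pulls each view's features toward
            # the teacher's global-view features of the same video.
            loss = loss - (pred * tgt).sum(-1).mean()
            count += 1
    return loss / count
```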

Authors (7)
  1. Naga VS Raviteja Chappa (6 papers)
  2. Pha Nguyen (17 papers)
  3. Alexander H Nelson (2 papers)
  4. Han-Seok Seo (5 papers)
  5. Xin Li (980 papers)
  6. Page Daniel Dobbs (5 papers)
  7. Khoa Luu (89 papers)
Citations (15)
