
Video Mobile-Former: Video Recognition with Efficient Global Spatial-temporal Modeling (2208.12257v1)

Published 25 Aug 2022 in cs.CV

Abstract: Transformer-based models have achieved top performance on major video recognition benchmarks. Benefiting from the self-attention mechanism, these models show a stronger ability to model long-range dependencies than CNN-based models. However, the significant computation overhead, resulting from the quadratic complexity of self-attention over a tremendous number of tokens, limits the use of existing video transformers in applications with limited resources such as mobile devices. In this paper, we extend Mobile-Former to Video Mobile-Former, which decouples the video architecture into a lightweight 3D-CNN for local context modeling and Transformer modules for global interaction modeling, in a parallel fashion. To avoid the significant computational cost of computing self-attention between the large number of local patches in videos, we propose to use very few global tokens (e.g., 6) for a whole video in the Transformer, which exchange information with the 3D-CNN via a cross-attention mechanism. Through efficient global spatial-temporal modeling, Video Mobile-Former significantly improves the video recognition performance of alternative lightweight baselines and outperforms other efficient CNN-based models in the low-FLOP regime from 500M to 6G total FLOPs on various video recognition tasks. Notably, Video Mobile-Former is the first Transformer-based video model to constrain the computational budget within 1G FLOPs.
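
The core idea in the abstract, a handful of global tokens that cross-attend to 3D-CNN features instead of full self-attention over all video patches, can be sketched in a few lines of PyTorch. The sketch below is an illustrative assumption rather than the authors' implementation: the module name VideoMobileFormerBlock, the depthwise 3D convolution used as the lightweight local branch, the projection layers, and all dimensions are hypothetical; only the parallel two-branch structure and the very small token count (e.g., 6) come from the abstract.

```python
# Minimal sketch of the token-feature exchange described in the abstract,
# NOT the authors' implementation. All names and sizes are assumptions.
import torch
import torch.nn as nn


class VideoMobileFormerBlock(nn.Module):
    """One parallel block: a lightweight 3D conv models local context while a
    few global tokens exchange information with the feature map via
    cross-attention, avoiding self-attention over all video patches."""

    def __init__(self, channels: int, token_dim: int, num_tokens: int = 6,
                 num_heads: int = 2):
        super().__init__()
        # Learnable global tokens shared across the whole video clip.
        self.tokens = nn.Parameter(torch.randn(1, num_tokens, token_dim))
        # Lightweight local branch (depthwise 3D conv as a stand-in for the
        # paper's efficient 3D-CNN block).
        self.local = nn.Conv3d(channels, channels, kernel_size=3, padding=1,
                               groups=channels)
        self.to_token_dim = nn.Linear(channels, token_dim)
        self.to_channels = nn.Linear(token_dim, channels)
        # Mobile -> Former: tokens attend to the local feature map.
        self.feat_to_tok = nn.MultiheadAttention(token_dim, num_heads,
                                                 batch_first=True)
        # Former -> Mobile: features attend back to the (few) tokens.
        self.tok_to_feat = nn.MultiheadAttention(token_dim, num_heads,
                                                 batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, H, W) video feature map.
        b, c, t, h, w = x.shape
        local = self.local(x)                         # local context branch
        feats = local.flatten(2).transpose(1, 2)      # (B, T*H*W, C)
        feats_td = self.to_token_dim(feats)           # (B, T*H*W, D)

        tokens = self.tokens.expand(b, -1, -1)
        # Cost is linear in the number of patches because the token set is
        # tiny (e.g., 6), unlike quadratic patch-to-patch self-attention.
        tokens, _ = self.feat_to_tok(tokens, feats_td, feats_td)
        fused, _ = self.tok_to_feat(feats_td, tokens, tokens)

        out = feats + self.to_channels(fused)         # fuse the two branches
        return out.transpose(1, 2).reshape(b, c, t, h, w)


if __name__ == "__main__":
    block = VideoMobileFormerBlock(channels=32, token_dim=64)
    clip = torch.randn(2, 32, 8, 14, 14)  # two short feature clips
    print(block(clip).shape)              # torch.Size([2, 32, 8, 14, 14])
```

Stacking blocks like this keeps the attention cost proportional to the number of patches times the number of tokens, which is what lets the total budget stay in the 500M to 6G FLOP range the abstract reports.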

Authors (9)
  1. Rui Wang (996 papers)
  2. Zuxuan Wu (144 papers)
  3. Dongdong Chen (164 papers)
  4. Yinpeng Chen (55 papers)
  5. Xiyang Dai (53 papers)
  6. Mengchen Liu (48 papers)
  7. Luowei Zhou (31 papers)
  8. Lu Yuan (130 papers)
  9. Yu-Gang Jiang (223 papers)
Citations (4)
