TubeR: Tubelet Transformer for Video Action Detection (2104.00969v5)

Published 2 Apr 2021 in cs.CV

Abstract: We propose TubeR: a simple solution for spatio-temporal video action detection. Different from existing methods that depend on either an off-line actor detector or hand-designed actor-positional hypotheses like proposals or anchors, we propose to directly detect an action tubelet in a video by simultaneously performing action localization and recognition from a single representation. TubeR learns a set of tubelet-queries and utilizes a tubelet-attention module to model the dynamic spatio-temporal nature of a video clip, which effectively reinforces the model capacity compared to using actor-positional hypotheses in the spatio-temporal space. For videos containing transitional states or scene changes, we propose a context aware classification head to utilize short-term and long-term context to strengthen action classification, and an action switch regression head for detecting the precise temporal action extent. TubeR directly produces action tubelets with variable lengths and even maintains good results for long video clips. TubeR outperforms the previous state-of-the-art on commonly used action detection datasets AVA, UCF101-24 and JHMDB51-21.
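
To make the architecture described in the abstract more concrete, below is a minimal, hypothetical PyTorch sketch of a tubelet-query decoder in the spirit of TubeR: a set of learned queries is decoded against clip features to jointly predict per-frame boxes, action classes, and a per-frame action-switch score. All module names, dimensions, and head designs here are illustrative assumptions, not the authors' implementation (which also includes the tubelet-attention module and the context-aware classification head).

```python
# Hypothetical sketch of a tubelet-query detector inspired by the TubeR abstract.
# Dimensions, layer counts, and head designs are assumptions for illustration only.
import torch
import torch.nn as nn


class TubeletDetector(nn.Module):
    def __init__(self, dim=256, num_queries=15, num_classes=80, clip_len=8):
        super().__init__()
        self.clip_len = clip_len
        # One learned query per tubelet hypothesis (no anchors or proposals).
        self.tubelet_queries = nn.Embedding(num_queries, dim)
        decoder_layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
        # Per-frame box regression: (cx, cy, w, h) for every frame of the clip.
        self.box_head = nn.Linear(dim, clip_len * 4)
        # Action classification (long/short-term context modelling omitted here).
        self.cls_head = nn.Linear(dim, num_classes + 1)  # +1 for "no action"
        # Action switch: per-frame score marking the temporal extent of the action.
        self.switch_head = nn.Linear(dim, clip_len)

    def forward(self, clip_features):
        # clip_features: (batch, tokens, dim) flattened spatio-temporal features
        # from a video backbone, assumed precomputed.
        b = clip_features.size(0)
        queries = self.tubelet_queries.weight.unsqueeze(0).expand(b, -1, -1)
        decoded = self.decoder(queries, clip_features)          # (b, num_queries, dim)
        boxes = self.box_head(decoded).view(b, -1, self.clip_len, 4).sigmoid()
        logits = self.cls_head(decoded)
        switch = self.switch_head(decoded).sigmoid()
        return boxes, logits, switch


if __name__ == "__main__":
    model = TubeletDetector()
    feats = torch.randn(2, 8 * 14 * 14, 256)  # dummy backbone features
    boxes, logits, switch = model(feats)
    print(boxes.shape, logits.shape, switch.shape)  # (2, 15, 8, 4), (2, 15, 81), (2, 15, 8)
```

In this sketch, each query produces a full tubelet (a box per frame) in one shot, which mirrors the abstract's claim of performing localization and recognition from a single representation rather than linking per-frame detections after the fact.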

Authors (13)
  1. Jiaojiao Zhao (15 papers)
  2. Yanyi Zhang (10 papers)
  3. Xinyu Li (136 papers)
  4. Hao Chen (1006 papers)
  5. Shuai Bing (1 paper)
  6. Mingze Xu (28 papers)
  7. Chunhui Liu (23 papers)
  8. Kaustav Kundu (9 papers)
  9. Yuanjun Xiong (52 papers)
  10. Davide Modolo (30 papers)
  11. Ivan Marsic (17 papers)
  12. Cees G. M. Snoek (134 papers)
  13. Joseph Tighe (30 papers)
Citations (64)
