Human-centric Spatio-Temporal Video Grounding With Visual Transformers (2011.05049v2)

Published 10 Nov 2020 in cs.CV, cs.AI, and cs.MM

Abstract: In this work, we introduce a novel task, Human-centric Spatio-Temporal Video Grounding (HC-STVG). Unlike existing referring expression tasks in images or videos, HC-STVG focuses on humans: it aims to localize a spatio-temporal tube of the target person in an untrimmed video based on a given textual description. This task is especially useful for healthcare and security applications, where surveillance videos can be extremely long but only a specific person during a specific period of time is of interest. HC-STVG is a video grounding task that requires both spatial (where) and temporal (when) localization, and existing grounding methods cannot handle it well. We tackle this task by proposing an effective baseline method named Spatio-Temporal Grounding with Visual Transformers (STGVT), which utilizes Visual Transformers to extract cross-modal representations for video-sentence matching and temporal localization. To facilitate this task, we also contribute an HC-STVG dataset consisting of 5,660 video-sentence pairs on complex multi-person scenes. Each video lasts 20 seconds and is paired with a natural query sentence averaging 17.25 words. Extensive experiments on this dataset demonstrate that the newly proposed method outperforms the existing baseline methods.
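The core pipeline described in the abstract, extracting joint cross-modal representations from video frames and a query sentence, then scoring video-sentence matches and predicting temporal boundaries, can be sketched generically. Below is a minimal PyTorch-style sketch under simplifying assumptions: pre-extracted 2048-d per-frame features, tokenized queries, and a single transformer encoder over the concatenated token and frame sequence. All module names (CrossModalMatcher, match_head, temporal_head), dimensions, and the pooling scheme are illustrative assumptions, not the authors' STGVT implementation.

```python
# Illustrative sketch only: a generic cross-modal transformer that scores
# video-sentence pairs and emits per-frame temporal logits. Hyperparameters
# and architecture details are assumptions for exposition.
import torch
import torch.nn as nn

class CrossModalMatcher(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=4,
                 vocab_size=10000, max_tokens=64, max_frames=32):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.frame_proj = nn.Linear(2048, d_model)  # pre-extracted CNN frame features
        self.type_emb = nn.Embedding(2, d_model)    # 0 = text token, 1 = video frame
        self.pos_emb = nn.Embedding(max_tokens + max_frames, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.match_head = nn.Linear(d_model, 1)     # video-sentence matching score
        self.temporal_head = nn.Linear(d_model, 2)  # per-frame start/end logits

    def forward(self, word_ids, frame_feats):
        # word_ids: (B, T_w) token ids; frame_feats: (B, T_f, 2048)
        B, T_w = word_ids.shape
        T_f = frame_feats.shape[1]
        text = self.word_emb(word_ids) + self.type_emb.weight[0]
        video = self.frame_proj(frame_feats) + self.type_emb.weight[1]
        x = torch.cat([text, video], dim=1)
        pos = torch.arange(T_w + T_f, device=x.device)
        x = x + self.pos_emb(pos)
        h = self.encoder(x)                              # joint cross-modal representation
        match_score = self.match_head(h.mean(dim=1))     # (B, 1) matching score
        temporal_logits = self.temporal_head(h[:, T_w:]) # (B, T_f, 2) boundary logits
        return match_score, temporal_logits

# Usage: score a batch of two video-sentence pairs.
model = CrossModalMatcher()
words = torch.randint(0, 10000, (2, 17))  # ~17-word queries, matching the dataset average
frames = torch.randn(2, 32, 2048)         # 32 sampled frames per 20-second clip
score, logits = model(words, frames)
```

In this sketch the matching score supports video-sentence matching while the per-frame logits support temporal localization; spatial localization of the person tube would require an additional per-frame detection or attention component not shown here.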

Authors (8)
  1. Zongheng Tang (4 papers)
  2. Yue Liao (35 papers)
  3. Si Liu (132 papers)
  4. Guanbin Li (177 papers)
  5. Xiaojie Jin (51 papers)
  6. Hongxu Jiang (9 papers)
  7. Qian Yu (116 papers)
  8. Dong Xu (167 papers)
Citations (79)
