
Online Real-time Multiple Spatiotemporal Action Localisation and Prediction (1611.08563v6)

Published 25 Nov 2016 in cs.CV

Abstract: We present a deep-learning framework for real-time multiple spatio-temporal (S/T) action localisation, classification and early prediction. Current state-of-the-art approaches work offline and are too slow to be useful in real-world settings. To overcome their limitations we introduce two major developments. Firstly, we adopt real-time SSD (Single Shot MultiBox Detector) convolutional neural networks to regress and classify detection boxes in each video frame potentially containing an action of interest. Secondly, we design an original and efficient online algorithm to incrementally construct and label 'action tubes' from the SSD frame level detections. As a result, our system is not only capable of performing S/T detection in real time, but can also perform early action prediction in an online fashion. We achieve new state-of-the-art results in both S/T action localisation and early action prediction on the challenging UCF101-24 and J-HMDB-21 benchmarks, even when compared to the top offline competitors. To the best of our knowledge, ours is the first real-time (up to 40fps) system able to perform online S/T action localisation and early action prediction on the untrimmed videos of UCF101-24.

Authors (5)
  1. Gurkirt Singh (19 papers)
  2. Suman Saha (49 papers)
  3. Michael Sapienza (11 papers)
  4. Philip Torr (172 papers)
  5. Fabio Cuzzolin (57 papers)
Citations (277)

Summary

Online Real-time Multiple Spatiotemporal Action Localisation and Prediction

The paper presents an advanced deep learning framework designed to accomplish real-time spatial and temporal (S/T) action localization and classification within videos. This area of research addresses significant limitations observed in existing state-of-the-art methods that primarily function offline and at non-real-time speeds, making them impractical for immediate real-world applications such as video surveillance and human-robot interaction.

Methodology Overview

To surmount the drawbacks of preceding methodologies, the authors introduce two pivotal innovations:

  1. Adoption of SSD CNNs: The framework utilizes Single Shot MultiBox Detector (SSD) convolutional neural networks to regress and classify detection boxes in each frame of a video that may contain an action of interest. This effectively removes the dependency on region proposal generation and offers a single-stage, end-to-end trainable model.
  2. Online Incremental Action Tube Construction: A novel and efficient algorithm is developed to incrementally construct and label 'action tubes' using detection boxes derived from SSD at the frame level. This method facilitates not only real-time S/T detection but also supports early action prediction in an online manner.
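The online tube-construction step above can be illustrated with a minimal sketch: each active tube is greedily extended with the best-overlapping detection from the new frame, and unmatched detections seed new tubes. The `Tube` class, the IoU threshold, and the greedy matching order here are illustrative assumptions, not the paper's exact algorithm (which additionally maintains per-class tubes and temporally trims them).

```python
from dataclasses import dataclass, field

def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

@dataclass
class Tube:
    boxes: list = field(default_factory=list)   # one box per frame
    scores: list = field(default_factory=list)  # one detection score per frame

    def mean_score(self):
        return sum(self.scores) / len(self.scores)

def link_frame(tubes, detections, iou_thresh=0.3):
    """Greedily extend each active tube with the best-matching detection
    in the new frame; unmatched detections start new tubes.
    `detections` is a list of (box, score) pairs for the current frame."""
    used = set()
    # Stronger tubes (higher mean score) pick their match first.
    for tube in sorted(tubes, key=lambda t: t.mean_score(), reverse=True):
        best, best_iou = None, iou_thresh
        for i, (box, _) in enumerate(detections):
            if i in used:
                continue
            ov = iou(tube.boxes[-1], box)
            if ov >= best_iou:
                best, best_iou = i, ov
        if best is not None:
            used.add(best)
            box, score = detections[best]
            tube.boxes.append(box)
            tube.scores.append(score)
    # Any detection left unmatched begins a new tube.
    for i, (box, score) in enumerate(detections):
        if i not in used:
            tubes.append(Tube([box], [score]))
    return tubes
```

Because each frame is processed once and tubes are never revisited, the labelled tubes are available at every time step, which is what enables the online and early-prediction behaviour described above.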

Performance and Results

The system sets new benchmarks for S/T action localization and early action prediction, as evidenced by its performance on rigorous datasets like UCF101-24 and J-HMDB-21. It achieves notable processing speeds, delivering up to 40 frames per second (fps), making it the first system capable of real-time online S/T action localization on untrimmed videos from the UCF101-24 dataset. Furthermore, empirical results demonstrate that the new framework improves detection accuracy over its offline counterparts.
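Early action prediction in this online setting amounts to labelling a tube from only the frames observed so far. A minimal sketch, where the running mean of per-class scores is an illustrative simplification rather than the paper's exact scoring rule:

```python
def predict_label(class_scores_per_frame, t):
    """Early prediction at time t: average the per-class scores over the
    frames observed so far (indices 0..t) and return the argmax class.
    `class_scores_per_frame` is a list of per-frame score vectors."""
    observed = class_scores_per_frame[:t + 1]
    n_classes = len(observed[0])
    means = [sum(frame[c] for frame in observed) / len(observed)
             for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: means[c])
```

As more of the video is observed, the averaged scores stabilise, so the predicted label can only become more reliable over time, which matches the intuition behind evaluating prediction accuracy as a function of observed video fraction.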

Performance is rigorously assessed in several input modes: RGB-only, real-time optical flow (RTF), and a more accurate but computationally intensive optical flow (AF). While AF provides higher accuracy, whether fused with RGB or used standalone, the RTF mode strikes a balance, delivering near-competitive accuracy with the significant speed advantages necessary for real-time application.
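One simple late-fusion scheme consistent with this setup averages the class scores of overlapping RGB and flow detections. The function below is an illustrative sketch under that assumption, not the paper's exact fusion rule:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def fuse_scores(rgb_dets, flow_dets, iou_thresh=0.5):
    """Late fusion sketch: for each RGB detection (box, class-score list),
    average its scores with those of the best-overlapping flow detection;
    unmatched RGB detections keep their original scores."""
    fused = []
    for box_r, scores_r in rgb_dets:
        best, best_iou = None, iou_thresh
        for box_f, scores_f in flow_dets:
            ov = iou(box_r, box_f)
            if ov >= best_iou:
                best, best_iou = scores_f, ov
        if best is not None:
            scores = [(a + b) / 2 for a, b in zip(scores_r, best)]
        else:
            scores = list(scores_r)
        fused.append((box_r, scores))
    return fused
```

The trade-off the paper reports then becomes a configuration choice: the flow stream fed into such a fusion step can be the fast RTF flow for real-time operation or the slower AF flow for maximum accuracy.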

Implications and Future Work

This research not only presents an efficient framework for action localization capable of handling real-time streams but also paves the way toward practical applications. The framework could be further accelerated by integrating faster detectors or by exploiting motion vectors in place of full optical flow. Likewise, the SSD detector could be substituted with other real-time capable models such as YOLO, which might yield further speed improvements. The authors also suggest that incorporating more advanced online tracking methods could refine the tube-generation process.

In conclusion, the paper's contributions are particularly relevant for enhancing autonomous systems requiring swift actions based on human activities, such as real-time monitoring systems, interactive robotics, and intelligent transport systems. The potential for deployment in real-world scenarios is substantial, given its ability to deliver accurate action recognition without sacrificing speed.
