Multi-Moments in Time: Learning and Interpreting Models for Multi-Action Video Understanding (1911.00232v4)

Published 1 Nov 2019 in cs.CV, cs.LG, and eess.IV

Abstract: Videos capture events that typically contain multiple sequential and simultaneous actions, even in the span of only a few seconds. However, most large-scale datasets built to train models for action recognition in video provide only a single label per video. Consequently, models can be incorrectly penalized for classifying actions that exist in the videos but are not explicitly labeled, and they do not learn the full spectrum of information present in each video during training. To address this limitation, we present the Multi-Moments in Time dataset (M-MiT), which includes over two million action labels for over one million three-second videos. This multi-label dataset introduces novel challenges in how to train and analyze models for multi-action detection. Here, we present baseline results for multi-action recognition using loss functions adapted for long-tail multi-label learning, provide improved methods for visualizing and interpreting models trained for multi-label action detection, and show the strength of transferring models trained on M-MiT to smaller datasets.
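The multi-label setup the abstract describes replaces the usual single-label softmax with an independent decision per action class, since several actions can be present in one clip. The sketch below shows the standard formulation that long-tail-adapted losses like the paper's build on: per-class binary cross-entropy over multi-hot targets, with a `pos_weight` term as one simple way to up-weight rare classes. The class count, feature dimension, and inverse-frequency weighting are illustrative assumptions, not the paper's exact loss.

```python
# Minimal sketch of multi-label action recognition training.
# Not the paper's exact recipe: the per-class BCE baseline that
# long-tail-adapted multi-label losses typically start from.
import torch
import torch.nn as nn

NUM_CLASSES = 313   # illustrative; M-MiT uses a few hundred action classes
FEAT_DIM = 2048     # assumed pooled video-feature dimension

class MultiLabelActionHead(nn.Module):
    """Maps a pooled video feature to independent per-class logits."""
    def __init__(self, feat_dim: int = FEAT_DIM, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Raw logits; apply one sigmoid per class at inference,
        # rather than a softmax across classes.
        return self.fc(feats)

head = MultiLabelActionHead()
feats = torch.randn(8, FEAT_DIM)          # batch of pooled video features
targets = torch.zeros(8, NUM_CLASSES)     # multi-hot labels per video
targets[:, [4, 17]] = 1.0                 # e.g. two actions co-occur in each clip

# pos_weight up-weights positives of rare (tail) classes; the inverse-frequency
# weighting here is a common heuristic, assumed for illustration only.
class_freq = torch.rand(NUM_CLASSES).clamp(min=1e-3)
criterion = nn.BCEWithLogitsLoss(pos_weight=1.0 / class_freq)

loss = criterion(head(feats), targets)
loss.backward()
```

Treating each class as its own binary problem is what lets a clip be penalized only for the labels it actually carries, which is the failure mode of single-label training that the dataset is designed to fix.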

Authors (10)
  1. Mathew Monfort (9 papers)
  2. Bowen Pan (16 papers)
  3. Kandan Ramakrishnan (8 papers)
  4. Alex Andonian (16 papers)
  5. Barry A McNamara (1 paper)
  6. Alex Lascelles (2 papers)
  7. Quanfu Fan (22 papers)
  8. Dan Gutfreund (20 papers)
  9. Rogerio Feris (105 papers)
  10. Aude Oliva (42 papers)
Citations (63)