
Weakly Supervised Dense Event Captioning in Videos (1812.03849v1)

Published 10 Dec 2018 in cs.CV

Abstract: Dense event captioning aims to detect and describe all events of interest contained in a video. Despite rapid progress in this area, existing methods tackle the task by relying on dense temporal annotations, which are costly to obtain. This paper formulates a new problem: weakly supervised dense event captioning, which does not require temporal segment annotations for model training. Our solution is based on a one-to-one correspondence assumption: each caption describes one temporal segment, and each temporal segment has one caption. This assumption holds in current benchmark datasets and in most real-world cases. We decompose the problem into a pair of dual problems, event captioning and sentence localization, and present a cycle system to train our model. Extensive experimental results demonstrate the ability of our model on both dense event captioning and sentence localization in videos.
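The cycle between the two dual problems can be illustrated with a toy sketch: a localizer maps a caption to a soft temporal segment, a captioner maps that segment back to sentence space, and the reconstruction error supervises both without any segment labels. All shapes, the linear "models", and the attention-based pooling below are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of the localization-captioning cycle from the
# abstract. The linear scoring and pooling are stand-ins chosen for
# brevity; the paper's real components are learned networks.
import numpy as np

rng = np.random.default_rng(0)

D = 8                                  # assumed shared feature dimension
T = 20                                 # number of video frames (assumed)
video = rng.normal(size=(T, D))        # per-frame video features
sentence = rng.normal(size=(D,))       # embedding of one caption

w_loc = rng.normal(size=(D,)) * 0.1    # toy localizer parameters
W_cap = rng.normal(size=(D, D)) * 0.1  # toy captioner parameters

def localize(video, sentence, w):
    """Score each frame by sentence relevance and return a soft
    attention over frames -- a stand-in for a predicted segment."""
    scores = video @ (sentence * w)
    e = np.exp(scores - scores.max())
    return e / e.sum()

def caption(video, attn, W):
    """Pool the attended segment and map it back to sentence space --
    a stand-in for a caption generator."""
    segment_feat = attn @ video        # attention-weighted frame mean
    return segment_feat @ W

# One cycle: caption -> temporal segment -> reconstructed caption.
attn = localize(video, sentence, w_loc)
recon = caption(video, attn, W_cap)

# Cycle-consistency loss: the reconstruction should match the input
# sentence, so no ground-truth segment annotation is required.
loss = float(np.mean((recon - sentence) ** 2))
```

Minimizing `loss` over both the localizer and captioner parameters is the weak-supervision signal: the only labels used are the captions themselves.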

Authors (6)
  1. Xuguang Duan (3 papers)
  2. Wenbing Huang (95 papers)
  3. Chuang Gan (195 papers)
  4. Jingdong Wang (236 papers)
  5. Wenwu Zhu (104 papers)
  6. Junzhou Huang (137 papers)
Citations (140)
