Self-supervised Spatiotemporal Representation Learning by Exploiting Video Continuity (2112.05883v3)

Published 11 Dec 2021 in cs.CV and cs.LG

Abstract: Recent self-supervised video representation learning methods have found significant success by exploring essential properties of videos, e.g., speed and temporal order. This work exploits an essential yet under-explored property of videos, video continuity, to obtain supervision signals for self-supervised representation learning. Specifically, we formulate three novel continuity-related pretext tasks, i.e., continuity justification, discontinuity localization, and missing section approximation, that jointly supervise a shared backbone for video representation learning. This self-supervision approach, termed the Continuity Perception Network (CPNet), solves the three tasks together and encourages the backbone network to learn local and long-range motion and context representations. It outperforms prior art on multiple downstream tasks, such as action recognition, video retrieval, and action localization. Additionally, video continuity can be complementary to other coarse-grained video properties for representation learning, and integrating the proposed pretext tasks into prior methods can yield substantial performance gains.
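
As a rough illustration of how continuity supervision could be derived from raw clips, the sketch below builds one training sample carrying labels for all three pretext tasks (continuity justification, discontinuity localization, missing section approximation). This is an assumption-based outline, not the authors' implementation: the function name, label encodings, and clip/gap lengths are illustrative choices.

```python
# Illustrative sketch only (not CPNet's actual code): construct a continuity
# pretext-task sample from a video tensor. All names and encodings are assumed.
import torch


def make_continuity_sample(frames: torch.Tensor, clip_len: int = 16, gap_len: int = 4):
    """Build one sample from a video tensor of shape (T, C, H, W).

    Returns:
        clip:      (clip_len, C, H, W) clip, possibly with a hidden discontinuity
        is_cont:   1 if continuous, 0 otherwise (continuity justification label)
        gap_start: index of the jump for discontinuity localization (-1 if continuous)
        missing:   (gap_len, C, H, W) removed frames for missing section approximation
                   (zeros when the clip is continuous)
    """
    T = frames.shape[0]
    if torch.rand(1).item() < 0.5:
        # Continuous case: sample clip_len consecutive frames.
        start = torch.randint(0, T - clip_len + 1, (1,)).item()
        clip = frames[start:start + clip_len]
        return clip, 1, -1, torch.zeros(gap_len, *frames.shape[1:])

    # Discontinuous case: sample a longer window, then drop a middle section of
    # gap_len frames so the remaining clip contains an unannounced temporal jump.
    start = torch.randint(0, T - clip_len - gap_len + 1, (1,)).item()
    window = frames[start:start + clip_len + gap_len]
    gap_start = torch.randint(1, clip_len, (1,)).item()       # position of the jump
    missing = window[gap_start:gap_start + gap_len]            # frames to approximate
    clip = torch.cat([window[:gap_start], window[gap_start + gap_len:]], dim=0)
    return clip, 0, gap_start, missing
```

Under this framing, a shared backbone would process `clip` and three lightweight heads would predict `is_cont`, `gap_start`, and a reconstruction of `missing`, which matches the paper's description of jointly supervising one backbone with the three tasks.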

Authors (7)
  1. Hanwen Liang (10 papers)
  2. Niamul Quader (4 papers)
  3. Zhixiang Chi (13 papers)
  4. Lizhe Chen (4 papers)
  5. Peng Dai (46 papers)
  6. Juwei Lu (13 papers)
  7. Yang Wang (672 papers)
Citations (25)
