Cycle-Contrast for Self-Supervised Video Representation Learning (2010.14810v1)

Published 28 Oct 2020 in cs.CV and cs.LG

Abstract: We present Cycle-Contrastive Learning (CCL), a novel self-supervised method for learning video representations. Motivated by the natural belonging and inclusion relation between a video and its frames, CCL is designed to find correspondences across frames and videos while considering contrastive representations in their respective domains. This differs from recent approaches that merely learn correspondences across frames or clips. In our method, the frame and video representations are learned from a single network based on an R3D architecture, with a shared non-linear transformation embedding both frame and video features before the cycle-contrastive loss. We demonstrate that the video representation learned by CCL transfers well to downstream video-understanding tasks, outperforming previous methods in nearest-neighbour retrieval and action recognition on UCF101, HMDB51 and MMAct.
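The cycle idea in the abstract, walking from a video embedding to its frames and back while keeping both domains contrastive, can be illustrated with a minimal numerical sketch. The function below is a hypothetical soft-cycle loss written for illustration only; it is not the paper's implementation, and the temperature parameter and softmax round-trip formulation are assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cycle_contrastive_loss(video_emb, frame_emb, tau=0.1):
    """Hypothetical sketch of a video -> frame -> video soft cycle.

    Each video embedding walks to the frame embeddings via a
    temperature-scaled softmax over cosine similarities, then walks
    back to the video embeddings; the loss penalizes round trips
    that do not return to the starting video (cross-entropy against
    the diagonal of the round-trip matrix).
    """
    v = l2_normalize(video_emb)      # (N, d) video embeddings
    f = l2_normalize(frame_emb)      # (M, d) frame embeddings
    a = softmax(v @ f.T / tau)       # video -> frame soft assignment, (N, M)
    b = softmax(f @ v.T / tau)       # frame -> video soft assignment, (M, N)
    p = a @ b                        # round-trip probabilities, (N, N)
    return -np.mean(np.log(np.diag(p) + 1e-12))
```

When each video's frames cluster tightly around it in the shared embedding space, the round-trip matrix is close to the identity and the loss approaches zero; mismatched embeddings inflate the off-diagonal mass and the loss grows.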

Authors (5)
  1. Quan Kong (20 papers)
  2. Wenpeng Wei (4 papers)
  3. Ziwei Deng (2 papers)
  4. Tomoaki Yoshinaga (5 papers)
  5. Tomokazu Murakami (4 papers)
Citations (53)