Learning by Aligning Videos in Time (2103.17260v2)

Published 31 Mar 2021 in cs.CV

Abstract: We present a self-supervised approach for learning video representations using temporal video alignment as a pretext task, while exploiting both frame-level and video-level information. We leverage a novel combination of temporal alignment loss and temporal regularization terms, which can be used as supervision signals for training an encoder network. Specifically, the temporal alignment loss (i.e., Soft-DTW) aims for the minimum cost for temporally aligning videos in the embedding space. However, optimizing solely for this term leads to trivial solutions, particularly, one where all frames get mapped to a small cluster in the embedding space. To overcome this problem, we propose a temporal regularization term (i.e., Contrastive-IDM) which encourages different frames to be mapped to different points in the embedding space. Extensive evaluations on various tasks, including action phase classification, action phase progression, and fine-grained frame retrieval, on three datasets, namely Pouring, Penn Action, and IKEA ASM, show superior performance of our approach over state-of-the-art methods for self-supervised representation learning from videos. In addition, our method provides significant performance gain where labeled data is lacking. Our code and labels are available on our research website: https://retrocausal.ai/research/

Authors (7)
  1. Sanjay Haresh (8 papers)
  2. Sateesh Kumar (6 papers)
  3. Huseyin Coskun (16 papers)
  4. Shahram Najam Syed (3 papers)
  5. Andrey Konin (7 papers)
  6. Muhammad Zeeshan Zia (1 paper)
  7. Quoc-Huy Tran (18 papers)
Citations (57)