Cross-modal Contrastive Learning with Asymmetric Co-attention Network for Video Moment Retrieval (2312.07435v1)

Published 12 Dec 2023 in cs.CV, cs.AI, cs.CL, and cs.LG

Abstract: Video moment retrieval is a challenging task requiring fine-grained interactions between the video and text modalities. Recent work in image-text pretraining has demonstrated that most existing pretrained models suffer from information asymmetry due to the difference in length between visual and textual sequences. We question whether the same problem also exists in the video-text domain, with the auxiliary need to preserve both spatial and temporal information. Thus, we evaluate a recently proposed solution involving the addition of an asymmetric co-attention network for video grounding tasks. Additionally, we incorporate a momentum contrastive loss for robust, discriminative representation learning in both modalities. We note that the integration of these supplementary modules yields better performance than state-of-the-art models on the TACoS dataset and comparable results on ActivityNet Captions, all while utilizing significantly fewer parameters than the baseline.
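
The abstract names two components: an asymmetric co-attention module and a momentum contrastive loss. The sketch below shows one common way such components are implemented; it is not the authors' released code, and all module names, dimensions, and hyperparameters are assumptions. The co-attention here is "asymmetric" in the sense that the short text sequence queries the long video sequence in one direction only, and the contrastive loss follows the MoCo-style pattern of an InfoNCE objective over a queue of negatives with an EMA-updated key encoder.

```python
# Illustrative sketch only, NOT the paper's implementation. Assumes MoCo-style
# momentum contrast and one-directional (text -> video) cross-attention; all
# names, shapes, and defaults below are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricCoAttention(nn.Module):
    """Few text tokens attend over many video tokens; attention is applied in
    one direction only, so cost stays linear in the video length."""
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats, video_feats):
        # text_feats: (B, L_t, D), video_feats: (B, L_v, D), with L_t << L_v
        fused, _ = self.attn(query=text_feats, key=video_feats, value=video_feats)
        return self.norm(text_feats + fused)  # residual connection

def momentum_contrastive_loss(q, k, queue, temperature: float = 0.07):
    """InfoNCE over a memory queue of negatives (MoCo-style).
    q: (B, D) query embeddings; k: (B, D) keys from the momentum encoder;
    queue: (K, D) stored negatives."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    queue = F.normalize(queue, dim=1)
    pos = torch.einsum("bd,bd->b", q, k).unsqueeze(1)   # (B, 1) positive logits
    neg = torch.einsum("bd,kd->bk", q, queue)           # (B, K) negative logits
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)   # positive is index 0
    return F.cross_entropy(logits, labels)

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m: float = 0.999):
    """EMA update of the key encoder from the query encoder."""
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)
```

In this pattern the query encoder is trained by backpropagation while the key encoder trails it via the exponential moving average, which keeps the queued negatives consistent across iterations; how the paper itself wires these pieces into its grounding pipeline is not specified in the abstract.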

Authors (6)
  1. Love Panta (3 papers)
  2. Prashant Shrestha (6 papers)
  3. Brabeem Sapkota (1 paper)
  4. Amrita Bhattarai (1 paper)
  5. Suresh Manandhar (16 papers)
  6. Anand Kumar Sah (3 papers)
Citations (1)
