Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal Attentions for Video Captioning (1804.05448v1)

Published 15 Apr 2018 in cs.CL, cs.AI, and cs.CV

Abstract: A major challenge for video captioning is to combine audio and visual cues. Existing multi-modal fusion methods have shown encouraging results in video understanding. However, the temporal structures of multiple modalities at different granularities are rarely explored, and how to selectively fuse the multi-modal representations at different levels of details remains uncharted. In this paper, we propose a novel hierarchically aligned cross-modal attention (HACA) framework to learn and selectively fuse both global and local temporal dynamics of different modalities. Furthermore, for the first time, we validate the superior performance of the deep audio features on the video captioning task. Finally, our HACA model significantly outperforms the previous best systems and achieves new state-of-the-art results on the widely used MSR-VTT dataset.
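The abstract describes attending over audio and visual streams and selectively fusing the resulting representations. Below is a minimal sketch of that general idea (not the authors' HACA implementation): a decoder state attends separately over visual and audio encoder features, and the two context vectors are fused into a single representation. All module names, dimensions, and the concatenation-based fusion are illustrative assumptions.

```python
# Sketch of cross-modal attentive fusion for captioning (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdditiveAttention(nn.Module):
    """Additive (Bahdanau-style) attention over one modality's features."""

    def __init__(self, query_dim: int, key_dim: int, attn_dim: int):
        super().__init__()
        self.q_proj = nn.Linear(query_dim, attn_dim)
        self.k_proj = nn.Linear(key_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, query, keys):
        # query: (batch, query_dim); keys: (batch, time, key_dim)
        energy = torch.tanh(self.q_proj(query).unsqueeze(1) + self.k_proj(keys))
        weights = F.softmax(self.score(energy).squeeze(-1), dim=-1)  # (batch, time)
        context = torch.bmm(weights.unsqueeze(1), keys).squeeze(1)   # (batch, key_dim)
        return context, weights


class CrossModalFusion(nn.Module):
    """Attend over visual and audio streams, then fuse the two contexts."""

    def __init__(self, hidden_dim=512, vis_dim=2048, aud_dim=128, attn_dim=256):
        super().__init__()
        self.vis_attn = AdditiveAttention(hidden_dim, vis_dim, attn_dim)
        self.aud_attn = AdditiveAttention(hidden_dim, aud_dim, attn_dim)
        self.fuse = nn.Linear(vis_dim + aud_dim, hidden_dim)

    def forward(self, decoder_state, vis_feats, aud_feats):
        vis_ctx, _ = self.vis_attn(decoder_state, vis_feats)
        aud_ctx, _ = self.aud_attn(decoder_state, aud_feats)
        return torch.tanh(self.fuse(torch.cat([vis_ctx, aud_ctx], dim=-1)))


if __name__ == "__main__":
    fusion = CrossModalFusion()
    h = torch.randn(2, 512)           # decoder hidden state at one time step
    vis = torch.randn(2, 30, 2048)    # e.g. frame-level visual CNN features
    aud = torch.randn(2, 20, 128)     # e.g. deep audio features
    print(fusion(h, vis, aud).shape)  # torch.Size([2, 512])
```

The paper's framework additionally aligns global and local temporal dynamics across modalities before fusion; this sketch shows only the single-level cross-modal attention step.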

Authors (3)
  1. Xin Wang (1307 papers)
  2. Yuan-Fang Wang (18 papers)
  3. William Yang Wang (254 papers)
Citations (75)
