Exploring Global Diversity and Local Context for Video Summarization (2201.11345v2)

Published 27 Jan 2022 in cs.CV and cs.AI

Abstract: Video summarization aims to automatically generate a diverse and concise summary, which is useful in large-scale video processing. Most methods adopt a self-attention mechanism across video frames, which fails to model the diversity of video frames. To alleviate this problem, we revisit the pairwise similarity measurement in the self-attention mechanism and find that the existing inner-product affinity leads to discriminative rather than diversified features. In light of this phenomenon, we propose global diverse attention, which instead uses the squared Euclidean distance to compute the affinities. Moreover, we model local contextual information with a novel local contextual attention to remove redundancy in the video. By combining these two attention mechanisms, a video SUMmarization model with a Diversified Contextual Attention scheme, namely SUM-DCA, is developed. Extensive experiments are conducted on benchmark data sets to verify the effectiveness and superiority of SUM-DCA in terms of F-score and rank-based evaluation, without any bells and whistles.
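The abstract's key idea is swapping the inner-product affinity in self-attention for one based on squared Euclidean distance. The sketch below illustrates that substitution on toy frame features; it is a minimal illustration, not the paper's implementation (SUM-DCA's exact scaling, learned projections, and local contextual attention are omitted).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inner_product_attention(q, k, v):
    # standard scaled dot-product affinity (the baseline the paper revisits)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def distance_based_attention(q, k, v):
    # affinity from negative squared Euclidean distance, in the spirit of
    # the paper's global diverse attention; the 1/sqrt(d) scaling here is
    # an assumption borrowed from standard attention, not from the paper
    sq = (q ** 2).sum(-1, keepdims=True)   # (n, 1): ||q_i||^2
    sk = (k ** 2).sum(-1)                  # (m,):  ||k_j||^2
    dist2 = sq + sk - 2.0 * (q @ k.T)      # (n, m): ||q_i - k_j||^2
    scores = -dist2 / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

# toy usage: 4 frames with 8-dim features attending over themselves
rng = np.random.default_rng(0)
frames = rng.normal(size=(4, 8))
summary_feats = distance_based_attention(frames, frames, frames)
```

Intuitively, near-duplicate frames have near-zero distance and so dominate each other's affinities, which makes redundancy explicit in a way the inner product does not.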

Authors (7)
  1. Yingchao Pan (1 paper)
  2. Ouhan Huang (5 papers)
  3. Qinghao Ye (31 papers)
  4. Zhongjin Li (1 paper)
  5. Wenjiang Wang (1 paper)
  6. Guodun Li (5 papers)
  7. Yuxing Chen (29 papers)
Citations (3)
