
Linear Video Transformer with Feature Fixation (2210.08164v1)

Published 15 Oct 2022 in cs.CV and cs.MM

Abstract: Vision Transformers have achieved impressive performance in video classification, while suffering from the quadratic complexity caused by the Softmax attention mechanism. Some studies alleviate the computational costs by reducing the number of tokens in attention calculation, but the complexity is still quadratic. Another promising way is to replace Softmax attention with linear attention, which has linear complexity but suffers a clear performance drop. We find that this drop in linear attention results from the lack of attention concentration on critical features. Therefore, we propose a feature fixation module to reweight the feature importance of the query and key before computing linear attention. Specifically, we regard the query, key, and value as various latent representations of the input token, and learn the feature fixation ratio by aggregating Query-Key-Value information. This is beneficial for measuring the feature importance comprehensively. Furthermore, we enhance the feature fixation by neighborhood association, which leverages additional guidance from spatially and temporally neighboring tokens. The proposed method significantly improves the linear attention baseline and achieves state-of-the-art performance among linear video Transformers on three popular video classification benchmarks. With fewer parameters and higher efficiency, our performance is even comparable to some Softmax-based quadratic Transformers.
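The abstract describes two pieces: standard linear attention, which replaces the Softmax with a kernel feature map so attention costs O(n·d²) instead of O(n²·d), and a feature fixation gate that reweights the query and key per feature using aggregated Q-K-V information. The sketch below is an illustrative reconstruction, not the paper's exact architecture: the `elu(x)+1` feature map, the sigmoid gate, and the placeholder weight matrices `w_q`/`w_k` are assumptions standing in for the learned modules.

```python
import numpy as np

def feature_map(x):
    # elu(x) + 1: a common positive feature map used in linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def feature_fixation(q, k, v, w_q, w_k):
    # Hypothetical fixation gate: learn per-feature reweighting ratios for
    # Q and K from the concatenated Q-K-V representation. w_q and w_k are
    # placeholders for learned projection weights of shape (3d, d).
    qkv = np.concatenate([q, k, v], axis=-1)       # (n, 3d)
    r_q = 1.0 / (1.0 + np.exp(-(qkv @ w_q)))       # sigmoid ratios, (n, d)
    r_k = 1.0 / (1.0 + np.exp(-(qkv @ w_k)))
    return q * r_q, k * r_k                        # reweighted Q and K

def linear_attention(q, k, v):
    # O(n d^2) linear attention:
    # out = phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1)
    qp, kp = feature_map(q), feature_map(k)
    kv = kp.T @ v                                  # (d, d), shared across tokens
    z = qp @ kp.sum(axis=0)                        # (n,) normalizer
    return (qp @ kv) / z[:, None]

# Toy example with random tokens and random (untrained) gate weights.
rng = np.random.default_rng(0)
n, d = 8, 4
q, k, v = rng.standard_normal((3, n, d))
w_q, w_k = 0.1 * rng.standard_normal((2, 3 * d, d))
q_fix, k_fix = feature_fixation(q, k, v, w_q, w_k)
out = linear_attention(q_fix, k_fix, v)
print(out.shape)  # (8, 4): one output vector per token
```

Because `phi(K)^T V` is a d×d matrix shared by all queries, the cost grows linearly in the number of tokens n; the fixation gate only adds elementwise reweighting before this computation. The paper's neighborhood association (guidance from spatio-temporal neighbors) is omitted here for brevity.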

Authors (11)
  1. Kaiyue Lu
  2. Zexiang Liu
  3. Jianyuan Wang
  4. Weixuan Sun
  5. Zhen Qin
  6. Dong Li
  7. Xuyang Shen
  8. Hui Deng
  9. Xiaodong Han
  10. Yuchao Dai
  11. Yiran Zhong
Citations (4)
