Towards Training Stronger Video Vision Transformers for EPIC-KITCHENS-100 Action Recognition (2106.05058v1)

Published 9 Jun 2021 in cs.CV

Abstract: With the recent surge of research on vision transformers, they have demonstrated remarkable potential for various challenging computer vision applications, such as image recognition, point cloud classification, and video understanding. In this paper, we present empirical results for training a stronger video vision transformer on the EPIC-KITCHENS-100 Action Recognition dataset. Specifically, we explore training techniques for video vision transformers, including augmentations, input resolutions, and initialization. With our training recipe, a single ViViT model achieves 47.4% accuracy on the validation set of EPIC-KITCHENS-100, outperforming the result reported in the original paper by 3.4%. We find that video transformers are especially good at predicting the noun in the verb-noun action prediction task, which makes their overall action prediction accuracy notably higher than that of convolutional networks. Surprisingly, even the best video transformers underperform convolutional networks on verb prediction. We therefore combine video vision transformers with several convolutional video networks and present our solution to the EPIC-KITCHENS-100 Action Recognition competition.
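The abstract's key finding is complementary strengths: transformers are stronger on noun prediction, convolutional networks on verb prediction, motivating a combined solution. The sketch below illustrates one plausible way to realize this as a weighted late fusion of per-head scores; the function name, fusion weights, and use of softmax averaging are illustrative assumptions, not the paper's actual ensembling method. The class counts (97 verbs, 300 nouns) are those of EPIC-KITCHENS-100.

```python
# Hypothetical late-fusion sketch: combine verb/noun scores from a video
# transformer and a convolutional video network, weighting the transformer
# higher for nouns and the CNN higher for verbs, per the abstract's finding.
# Weights and names are illustrative, not taken from the paper.
import torch

NUM_VERBS, NUM_NOUNS = 97, 300  # EPIC-KITCHENS-100 class counts


def fuse_predictions(vit_verb, vit_noun, cnn_verb, cnn_noun,
                     verb_w=0.3, noun_w=0.7):
    """Weighted average of softmax scores from the two model families.

    verb_w / noun_w are the transformer's share of each head; the CNN
    receives the complementary weight. Values here are assumptions.
    """
    verb_scores = verb_w * vit_verb.softmax(-1) + (1 - verb_w) * cnn_verb.softmax(-1)
    noun_scores = noun_w * vit_noun.softmax(-1) + (1 - noun_w) * cnn_noun.softmax(-1)
    return verb_scores.argmax(-1), noun_scores.argmax(-1)


# Usage with random logits for a batch of 4 clips.
b = 4
verbs, nouns = fuse_predictions(
    torch.randn(b, NUM_VERBS), torch.randn(b, NUM_NOUNS),
    torch.randn(b, NUM_VERBS), torch.randn(b, NUM_NOUNS),
)
print(verbs, nouns)
```

In practice the per-head weights would be tuned on the validation set; the point of the sketch is only that fusing at the verb and noun heads separately lets each model family contribute where it is strongest.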

Authors (10)
  1. Ziyuan Huang (43 papers)
  2. Zhiwu Qing (29 papers)
  3. Xiang Wang (279 papers)
  4. Yutong Feng (33 papers)
  5. Shiwei Zhang (179 papers)
  6. Jianwen Jiang (25 papers)
  7. Zhurong Xia (4 papers)
  8. Mingqian Tang (23 papers)
  9. Nong Sang (86 papers)
  10. Marcelo H. Ang Jr (45 papers)
Citations (11)
