
Transformers Meet Visual Learning Understanding: A Comprehensive Review (2203.12944v1)

Published 24 Mar 2022 in cs.CV

Abstract: The dynamic attention mechanism and global modeling ability of the Transformer give it strong feature-learning capability. In recent years, Transformer-based methods have become competitive with CNNs in computer vision. This review surveys the current research progress of the Transformer in image and video applications, providing a comprehensive overview of the Transformer in visual learning understanding. First, the attention mechanism, which plays an essential part in the Transformer, is reviewed. Second, the visual Transformer model and the principle of each module are introduced. Third, existing Transformer-based models are investigated and their performance compared across visual learning understanding applications. Three image tasks and two video tasks are covered: the former include image classification, object detection, and image segmentation; the latter comprise object tracking and video classification. Comparing the performance of different models on these tasks across several public benchmark data sets is a key contribution. Finally, ten general problems are summarized, and the development prospects of the visual Transformer are given.
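The attention mechanism the abstract highlights is, at its core, scaled dot-product attention. As a minimal sketch (using NumPy; the function name and toy shapes are illustrative, not from the paper), it can be written as:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for 2-D query/key/value matrices."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise query-key similarities
    # numerically stable row-wise softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # attention-weighted mixture of the values

# toy example: 3 tokens, head dimension 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Each output row is a convex combination of the value rows, with weights determined dynamically by query-key similarity; this is the "dynamic" and "global" property the review attributes to the Transformer.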

Authors (7)
  1. Yuting Yang (45 papers)
  2. Licheng Jiao (109 papers)
  3. Xu Liu (213 papers)
  4. Fang Liu (800 papers)
  5. Shuyuan Yang (36 papers)
  6. Zhixi Feng (7 papers)
  7. Xu Tang (48 papers)
Citations (28)