
Vision Transformers: State of the Art and Research Challenges (2207.03041v1)

Published 7 Jul 2022 in cs.CV

Abstract: Transformers have achieved great success in natural language processing. Due to the powerful capability of the self-attention mechanism in transformers, researchers have developed vision transformers for a variety of computer vision tasks, such as image recognition, object detection, image segmentation, pose estimation, and 3D reconstruction. This paper presents a comprehensive overview of the literature on different architecture designs and training tricks (including self-supervised learning) for vision transformers. Our goal is to provide a systematic review along with open research opportunities.
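The two operations the abstract attributes to vision transformers — splitting an image into patch tokens and applying self-attention over them — can be sketched as follows. This is a minimal illustrative example, not code from the paper; the image size, patch size, and embedding dimension are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_embed(img, patch=4, dim=8):
    """Split an HxWxC image into non-overlapping patches and linearly
    project each flattened patch to a `dim`-dimensional token."""
    h, w, c = img.shape
    p = img.reshape(h // patch, patch, w // patch, patch, c)
    p = p.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)
    W = rng.standard_normal((patch * patch * c, dim)) * 0.02  # random projection
    return p @ W  # (num_patches, dim)

def self_attention(x):
    """Single-head scaled dot-product self-attention over the token sequence."""
    d = x.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.02 for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)                 # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v                            # attention-weighted values

img = rng.standard_normal((16, 16, 3))  # toy 16x16 RGB image
tokens = patch_embed(img)               # 16 patches of 4x4 -> 16 tokens
out = self_attention(tokens)
```

Each output token is a weighted mixture of every input token, which is what lets a vision transformer model long-range spatial relationships that convolutions only capture through deep stacks of layers.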

Authors (3)
  1. Bo-Kai Ruan (8 papers)
  2. Hong-Han Shuai (56 papers)
  3. Wen-Huang Cheng (40 papers)
Citations (13)