Aggregated Pyramid Vision Transformer: Split-transform-merge Strategy for Image Recognition without Convolutions (2203.00960v1)

Published 2 Mar 2022 in cs.CV

Abstract: Following the success of the Transformer in natural language processing, its encoder-decoder structure and attention mechanism have been applied to computer vision. Recently, state-of-the-art convolutional neural networks for several computer vision tasks (image classification, object detection, semantic segmentation, etc.) have incorporated Transformer concepts, suggesting that the Transformer holds strong promise for image recognition. Since the Vision Transformer was proposed, a growing number of works have used self-attention to replace convolutional layers entirely. Building on the Vision Transformer and combining it with a pyramid architecture, this work uses a split-transform-merge strategy to propose a group encoder, and names the resulting network architecture the Aggregated Pyramid Vision Transformer (APVT). We perform image classification on the CIFAR-10 dataset and object detection on the COCO 2017 dataset. Compared with other network architectures that use a Transformer as the backbone, APVT achieves excellent results while reducing the computational cost. We hope this improved strategy can serve as a reference for future Transformer research in computer vision.
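
The abstract gives no implementation details for the group encoder, so the following is a minimal PyTorch sketch of one plausible reading of the split-transform-merge strategy: the channel dimension is split into parallel groups, each group is transformed by its own small transformer encoder, and the outputs are merged back together. All class names and hyperparameters here (GroupEncoder, groups=4, etc.) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class GroupEncoder(nn.Module):
    """Hypothetical split-transform-merge encoder block.

    Split: the channel dimension is divided into `groups` branches.
    Transform: each branch runs its own small transformer encoder layer.
    Merge: branch outputs are concatenated and fused by a linear layer.
    """

    def __init__(self, dim, groups=4, heads=2, mlp_ratio=4.0):
        super().__init__()
        assert dim % groups == 0, "dim must be divisible by groups"
        branch_dim = dim // groups
        self.groups = groups
        # One narrow encoder layer per branch; each branch attends over
        # the full token sequence but only sees its slice of channels,
        # which keeps the per-branch attention and MLP cost small.
        self.branches = nn.ModuleList(
            nn.TransformerEncoderLayer(
                d_model=branch_dim,
                nhead=heads,
                dim_feedforward=int(branch_dim * mlp_ratio),
                batch_first=True,
            )
            for _ in range(groups)
        )
        self.merge = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, tokens, dim)
        chunks = x.chunk(self.groups, dim=-1)                  # split
        outs = [b(c) for b, c in zip(self.branches, chunks)]   # transform
        return self.merge(torch.cat(outs, dim=-1))             # merge


if __name__ == "__main__":
    # Toy usage: an 8x8 patch grid (64 tokens) with 64-dim embeddings.
    tokens = torch.randn(2, 64, 64)
    block = GroupEncoder(dim=64, groups=4)
    print(block(tokens).shape)  # torch.Size([2, 64, 64])
```

In a pyramid architecture, blocks like this would be stacked in stages of decreasing token count and increasing channel width; the grouping is what would account for the reduced computational cost the abstract claims relative to a full-width encoder.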

Authors (6)
  1. Rui-Yang Ju (19 papers)
  2. Ting-Yu Lin (6 papers)
  3. Jen-Shiun Chiang (16 papers)
  4. Jia-Hao Jian (3 papers)
  5. Yu-Shian Lin (5 papers)
  6. Liu-Rui-Yi Huang (1 paper)
Citations (1)