Searching the Search Space of Vision Transformer (2111.14725v1)

Published 29 Nov 2021 in cs.CV

Abstract: Vision Transformers have shown strong visual representation power in many vision tasks, such as recognition and detection, and have thus attracted fast-growing efforts to manually design more effective architectures. In this paper, we propose to use neural architecture search to automate this process, searching not only the architecture but also the search space itself. The central idea is to gradually evolve the different search dimensions, guided by their E-T Error computed with a weight-sharing supernet. Based on the space-searching process, we also provide design guidelines for general vision transformers with extensive analysis, which could promote the understanding of vision transformers. Remarkably, the models found in the searched space, named S3 (short for Searching the Search Space), achieve superior performance to recently proposed models such as Swin, DeiT and ViT when evaluated on ImageNet. The effectiveness of S3 is also illustrated on object detection, semantic segmentation and visual question answering, demonstrating its generality to downstream vision and vision-language tasks. Code and models will be available at https://github.com/microsoft/Cream.
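The abstract describes the central mechanism only at a high level: evolve the dimensions of a vision-transformer search space using an E-T Error signal estimated with a weight-sharing supernet, then search for architectures inside the evolved space. The sketch below illustrates that control flow under explicit assumptions: the dimension names and candidate values, the `evaluate_subnet` placeholder, and the E-T Error formula are all hypothetical and are not taken from the paper or its released code.

```python
import random

# Illustrative search dimensions for a vision transformer; the paper's actual
# dimensions and candidate ranges may differ (assumption for this sketch).
search_space = {
    "depth":     [10, 12, 14, 16],
    "embed_dim": [320, 384, 448],
    "mlp_ratio": [3.0, 3.5, 4.0],
}

def evaluate_subnet(config):
    """Placeholder for evaluating a sub-network drawn from a weight-sharing
    supernet. Here it returns a deterministic pseudo-accuracy so the sketch
    runs without training anything."""
    rng = random.Random(hash(tuple(sorted(config.items()))))
    return rng.uniform(0.70, 0.82)

def et_error(space, num_samples=16):
    """Hypothetical stand-in for the paper's E-T Error: combine the expected
    error and the top (best-subnet) error over randomly sampled sub-network
    configurations. The exact definition in the paper differs."""
    errors = []
    for _ in range(num_samples):
        config = {dim: random.choice(vals) for dim, vals in space.items()}
        errors.append(1.0 - evaluate_subnet(config))
    expected_err = sum(errors) / len(errors)
    top_err = min(errors)
    return 0.5 * expected_err + 0.5 * top_err

def evolve_search_space(space, steps=3):
    """Gradually evolve each search dimension, keeping a shifted candidate
    range whenever it lowers the (hypothetical) E-T Error, i.e. search the
    search space before searching the final architecture inside it."""
    for _ in range(steps):
        for dim in list(space):
            baseline = et_error(space)
            step = space[dim][1] - space[dim][0]          # grid spacing
            shifted = [v + step for v in space[dim]]      # shift range upward
            candidate = dict(space, **{dim: shifted})
            if et_error(candidate) < baseline:
                space = candidate
    return space

if __name__ == "__main__":
    print(evolve_search_space(search_space))
```

The point of the sketch is the outer loop: the candidate ranges of the search space are updated step by step, guided by an error estimate from the supernet, before any final architecture search, which is the "searching the search space" idea named in the title.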

Authors (8)
  1. Minghao Chen (37 papers)
  2. Kan Wu (42 papers)
  3. Bolin Ni (11 papers)
  4. Houwen Peng (36 papers)
  5. Bei Liu (63 papers)
  6. Jianlong Fu (91 papers)
  7. Hongyang Chao (34 papers)
  8. Haibin Ling (142 papers)
Citations (45)
