GLiT: Neural Architecture Search for Global and Local Image Transformer (2107.02960v3)

Published 7 Jul 2021 in cs.CV

Abstract: We introduce the first Neural Architecture Search (NAS) method to find a better transformer architecture for image recognition. Recently, transformers without CNN-based backbones have been found to achieve impressive performance for image recognition. However, the transformer is designed for NLP tasks and thus can be sub-optimal when directly applied to image recognition. To improve the visual representation ability of transformers, we propose a new search space and search algorithm. Specifically, we introduce a locality module that explicitly models the local correlations in images at lower computational cost. With the locality module, our search space is defined to let the search algorithm freely trade off between global and local information, as well as optimize the low-level design choices in each module. To tackle the huge search space, we propose a hierarchical neural architecture search method that searches for the optimal vision transformer at two levels separately with an evolutionary algorithm. Extensive experiments on the ImageNet dataset demonstrate that our method can find more discriminative and efficient transformer variants than the ResNet family (e.g., ResNet101) and the baseline ViT for image classification.

The paper "GLiT: Neural Architecture Search for Global and Local Image Transformer" presents a method for enhancing the architecture of transformers specifically for image recognition tasks through Neural Architecture Search (NAS). While transformers have showcased significant performance improvements in computer vision, their original design is tailored for NLP tasks, which may limit effectiveness in image-related applications.

To address this, the authors introduce a search space and search algorithm designed to improve the visual representation capabilities of transformers. The core innovation of the paper is the locality module, which captures local image correlations at reduced computational cost. With this module, the search space lets the algorithm balance global and local information while optimizing low-level design choices on a module-by-module basis.
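
As a rough illustration of how such a locality module might look, here is a minimal sketch built around a depthwise 1D convolution over the token sequence; the class name, projections, and kernel size are assumptions for exposition, not taken from the authors' implementation:

```python
import torch
import torch.nn as nn

class LocalityModule(nn.Module):
    """Sketch of a convolution-based locality module: it mixes each token
    with a small window of neighboring tokens, which is cheaper than full
    global self-attention. Names and sizes here are illustrative."""

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.proj_in = nn.Linear(dim, dim)
        # Depthwise conv: each channel sees only a local token window.
        self.dwconv = nn.Conv1d(dim, dim, kernel_size,
                                padding=kernel_size // 2, groups=dim)
        self.act = nn.GELU()
        self.proj_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim)
        y = self.proj_in(x).transpose(1, 2)           # (batch, dim, num_tokens)
        y = self.act(self.dwconv(y)).transpose(1, 2)  # back to (batch, tokens, dim)
        return self.proj_out(y)

# Example: a 196-token (14x14 patch) sequence with 192-dim embeddings.
x = torch.randn(2, 196, 192)
print(LocalityModule(192)(x).shape)  # torch.Size([2, 196, 192])
```

A mixed transformer block could then split its head budget between global attention heads and such locality modules, with the search choosing the ratio per layer.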

A key challenge is the vast search space created by this free-form definition, which makes a direct search computationally expensive. The authors therefore propose a hierarchical neural architecture search that decomposes the problem into two levels, each handled by an evolutionary algorithm (see the sketch below). This hierarchical approach navigates the search space efficiently to identify strong transformer configurations for vision tasks.
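
To make the two-level idea concrete, the following is a minimal, assumption-laden sketch of an evolutionary loop applied to the high-level choice (how many of each block's heads are locality modules versus global attention heads); the block count, head budget, and toy fitness function are placeholders, since the actual method scores candidates on ImageNet (typically with shared supernet weights):

```python
import random
from typing import Callable, List

def evolve(sample: Callable[[], List[int]],
           mutate: Callable[[List[int]], List[int]],
           fitness: Callable[[List[int]], float],
           pop_size: int = 20, generations: int = 10) -> List[int]:
    """Generic evolutionary loop, reusable at both search levels."""
    population = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # keep the fittest half
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

NUM_BLOCKS = 12       # assumed network depth (placeholder)
HEADS_PER_BLOCK = 4   # assumed per-block head budget (placeholder)

# Level-1 genotype: per block, how many heads are locality modules
# (the remaining heads stay global self-attention).
def sample_high() -> List[int]:
    return [random.randint(0, HEADS_PER_BLOCK) for _ in range(NUM_BLOCKS)]

def mutate_high(g: List[int]) -> List[int]:
    g = list(g)
    g[random.randrange(NUM_BLOCKS)] = random.randint(0, HEADS_PER_BLOCK)
    return g

# Toy stand-in for fitness; the real method evaluates each candidate's
# validation accuracy on ImageNet.
def toy_fitness(g: List[int]) -> float:
    return -sum(abs(h - HEADS_PER_BLOCK / 2) for h in g)

best_split = evolve(sample_high, mutate_high, toy_fitness)
# Level 2 would then fix this global/local split and run the same loop over
# low-level choices (kernel sizes, expansion ratios, head dimensions).
```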

The empirical validation of their approach is extensive. Using the ImageNet dataset, the authors demonstrate that their optimized transformers, named Global and Local Image Transformers (GLiTs), outperform established architectures such as the ResNet family (e.g., ResNet101) and the baseline Vision Transformer (ViT). The results indicate that the GLiTs are more discriminative and efficient, which underscores the efficacy of the proposed NAS method in finding superior transformer architectures for image recognition.

Authors (9)
  1. Boyu Chen (30 papers)
  2. Peixia Li (7 papers)
  3. Chuming Li (19 papers)
  4. Baopu Li (45 papers)
  5. Lei Bai (154 papers)
  6. Chen Lin (75 papers)
  7. Ming Sun (146 papers)
  8. Wanli Ouyang (358 papers)
  9. Junjie Yan (109 papers)
Citations (80)