An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

(2010.11929)
Published Oct 22, 2020 in cs.CV, cs.AI, and cs.LG

Abstract

While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.

As the pre-training dataset grows, large ViT models overtake BiT ResNets and also pull ahead of smaller ViT variants.

Overview

  • Transformers, which have excelled in NLP, are explored for image recognition tasks in comparison to traditional CNNs.

  • The Vision Transformer (ViT) introduces a method to process images using Transformers without convolutional layers, treating image patches as tokens.

  • ViT achieves superior performance on benchmarks like ImageNet when trained on large-scale datasets, surpassing state-of-the-art CNNs.

  • The paper discusses a 'hybrid' approach, combining CNN features with ViT, and analyses the model's data requirements and computational trade-offs.

  • ViT represents a shift in image recognition techniques, with potential applications in various domains, and motivates further research into future improvements.

Introduction

Transformers have been widely adopted for natural language processing tasks, with architectures such as BERT and GPT leading the way. Due to their parallelizability and scalability, Transformers are capable of handling very large models with hundreds of billions of parameters. Their success in NLP leads to the question of whether similar architectures could be successfully applied to computer vision tasks, where Convolutional Neural Networks (CNNs) have been the dominant approach.

Vision Transformer (ViT)

The paper introduces the Vision Transformer (ViT), a novel approach that uses a pure Transformer, with no convolutional layers, to process images. ViT splits an image into a sequence of fixed-size patches, linearly embeds each patch, adds position embeddings, and feeds the resulting sequence into a standard Transformer encoder (see the sketch below). This is analogous to treating image patches as tokens (words) in NLP, allowing the model to use the self-attention mechanism of Transformers to capture global dependencies between patches.
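
The following is a minimal, hypothetical sketch in PyTorch of this patch-tokenization pipeline. The sizes (patch size 16, hidden dimension 768, 12 heads and layers) follow the ViT-Base configuration described in the paper, but the encoder here uses PyTorch's stock layers, which differ in details (e.g., LayerNorm placement) from the authors' implementation.

```python
# Sketch of ViT-style patch tokenization (illustrative, not the authors' reference code).
# An image is split into fixed-size patches, each patch is linearly embedded, a learnable
# [class] token is prepended, and position embeddings are added before a Transformer encoder.
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A stride=patch_size convolution is equivalent to cutting the image into
        # non-overlapping patches and applying a shared linear projection to each.
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches + 1, dim))

    def forward(self, x):                          # x: (B, 3, H, W)
        x = self.proj(x)                           # (B, dim, H/16, W/16)
        x = x.flatten(2).transpose(1, 2)           # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1)             # prepend the [class] token
        return x + self.pos_embed                  # add learned position embeddings

# Feed the token sequence into a standard Transformer encoder.
tokens = PatchEmbedding()(torch.randn(2, 3, 224, 224))
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, dim_feedforward=3072, batch_first=True),
    num_layers=12,
)
out = encoder(tokens)                              # (2, 197, 768); out[:, 0] is the class representation
```

For classification, a small MLP head is attached to the class-token output (out[:, 0]); everything else about the encoder is a standard Transformer.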

Scalability and Performance

The authors show that when trained on sufficient data, the inductive biases built into CNNs, such as translation equivariance and locality, become less critical. Pre-trained on large-scale datasets, ViT outperforms state-of-the-art CNNs on image recognition benchmarks. The research demonstrates that large-scale pre-training followed by transfer to benchmarks such as ImageNet and CIFAR-100 yields excellent results while requiring less pre-training compute than comparable CNNs.

Related Work and Innovations

The publication acknowledges related work in which self-attention and Transformers have been applied to computer vision, but unlike those approaches, the Vision Transformer dispenses with the CNN architecture altogether and applies a standard Transformer directly to image patches. Prior efforts relied on specialized attention patterns that were difficult to scale efficiently, whereas ViT leverages the scalability of standard Transformers. The paper also discusses a 'hybrid' approach, in which CNN feature maps are fed into the ViT, offering a way to combine the Transformer with existing CNN backbones (see the sketch after this paragraph).
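
Below is a minimal sketch of how such a hybrid might be wired up. The small convolutional stem is only a stand-in for the ResNet backbone used in the paper, and the token dimension of 768 is an assumption borrowed from the ViT-Base configuration.

```python
# Sketch of the 'hybrid' variant (illustrative stem, not the paper's exact ResNet backbone):
# a CNN produces a spatial feature map, and each 1x1 spatial position of that map is
# projected and treated as one input token for the Transformer encoder.
import torch
import torch.nn as nn

cnn_stem = nn.Sequential(                          # stand-in for an early ResNet stage
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
    nn.ReLU(),
    nn.Conv2d(64, 256, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
)
to_tokens = nn.Conv2d(256, 768, kernel_size=1)     # 1x1 'patches' of the feature map

x = torch.randn(2, 3, 224, 224)
feat = cnn_stem(x)                                 # (2, 256, 56, 56)
tokens = to_tokens(feat).flatten(2).transpose(1, 2)  # (2, 3136, 768)
# 'tokens' then replaces the raw patch embeddings fed to the Transformer encoder above.
```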

Empirical Analyses and Future Directions

The paper presents extensive empirical analyses of the model's data requirements and the trade-offs between performance and computational cost. Different ViT variants are evaluated and outperform ResNets at the same pre-training compute budget. The study also examines internal representations, attention patterns, and the role of the class token in ViT.

In conclusion, ViT marks a paradigm shift in image recognition, using Transformers effectively for the task. Open challenges include applying ViT to other vision tasks and further exploring self-supervised pre-training methods. Additionally, future scaling of ViT is likely to lead to even better performance. The potential applications of ViT span across a broad spectrum of domains that rely on image recognition, and continued research could further consolidate its position as a new standard in the field.
