An Image is Worth More Than 16x16 Patches: Exploring Transformers on Individual Pixels (2406.09415v1)

Published 13 Jun 2024 in cs.CV and cs.LG

Abstract: This work does not introduce a new method. Instead, we present an interesting finding that questions the necessity of the inductive bias -- locality in modern computer vision architectures. Concretely, we find that vanilla Transformers can operate by directly treating each individual pixel as a token and achieve highly performant results. This is substantially different from the popular design in Vision Transformer, which maintains the inductive bias from ConvNets towards local neighborhoods (e.g. by treating each 16x16 patch as a token). We mainly showcase the effectiveness of pixels-as-tokens across three well-studied tasks in computer vision: supervised learning for object classification, self-supervised learning via masked autoencoding, and image generation with diffusion models. Although directly operating on individual pixels is less computationally practical, we believe the community must be aware of this surprising piece of knowledge when devising the next generation of neural architectures for computer vision.

Exploring Transformers on Individual Pixels: A Detailed Analysis

Introduction

The paper "An Image is Worth More Than 16×16 Patches: Exploring Transformers on Individual Pixels" by Duy-Kien Nguyen et al. investigates the role of the inductive bias of 'locality' in vision architectures. Specifically, it evaluates whether treating individual pixels as tokens in a Transformer model can be effective for various computer vision tasks, contrary to the conventional approach which groups pixels into patches.

Methodology and Model

The authors introduce Pixel Transformer (PiT), an adaptation of the vanilla Transformer that directly uses individual pixels as tokens, unlike the Vision Transformer (ViT), which operates on 16×16 pixel patches. This design eliminates locality—a bias that assumes neighboring pixels are more related than distant ones—and replaces it with a purely data-driven approach.

PiT, like a standard Transformer, includes multi-headed Self-Attention and MLP blocks. The key difference is that each token corresponds to a single pixel, requiring the Transformer to infer all spatial relationships from scratch without priors on the 2D structure of the image.
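
To make the pixels-as-tokens idea concrete, the sketch below shows a minimal PyTorch-style encoder in which every pixel is linearly projected to a token before a standard Transformer encoder. The class name, hyperparameters, input resolution, and the use of a [CLS] token with learnable position embeddings are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a pixels-as-tokens encoder (illustrative; not the authors' code).
import torch
import torch.nn as nn

class PixelTransformer(nn.Module):
    def __init__(self, img_size=28, in_chans=3, embed_dim=192, depth=6, num_heads=3, num_classes=100):
        super().__init__()
        num_tokens = img_size * img_size            # one token per pixel
        self.proj = nn.Linear(in_chans, embed_dim)  # project each RGB pixel to a token embedding
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_tokens + 1, embed_dim))  # learnable positions
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, dim_feedforward=4 * embed_dim,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                           # x: (B, C, H, W)
        B = x.shape[0]
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C): every pixel becomes a token
        tokens = self.proj(tokens) + self.pos_embed[:, 1:]
        cls = self.cls_token.expand(B, -1, -1) + self.pos_embed[:, :1]
        z = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(z[:, 0])                   # classify from the [CLS] token

logits = PixelTransformer()(torch.randn(2, 3, 28, 28))  # -> (2, 100)
```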

Empirical Evaluation

The paper reports empirical results across three widely studied tasks in computer vision: supervised learning, self-supervised learning via Masked Autoencoding (MAE), and image generation with Diffusion Transformers (DiT).

Supervised Learning

For supervised learning, the authors benchmark PiT on the CIFAR-100 and ImageNet datasets, reporting top-1 (Acc@1) and top-5 (Acc@5) accuracy. On CIFAR-100, PiT notably outperformed its ViT counterpart, achieving up to 86.4% Acc@1 compared to 83.7% for ViT-S at the same model scale. Similar improvements were observed on ImageNet, indicating that PiTs scale effectively and yield better results as model capacity increases.

Self-Supervised Learning

For self-supervised learning, PiT was evaluated with MAE on CIFAR-100. MAE pre-training followed by supervised fine-tuning showed consistent gains: for instance, PiT-S reached an Acc@1 of 87.7%, up from 86.4% when trained from scratch, again outperforming the ViT-S baseline.
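
For intuition, the following sketch shows how MAE-style random masking might look when the masked unit is an individual pixel token rather than a patch. The helper `random_pixel_mask`, the 75% masking ratio, and the token shapes are illustrative assumptions, not the paper's exact recipe.

```python
# Illustrative pixel-level random masking for MAE-style pre-training (not the authors' implementation).
import torch

def random_pixel_mask(x, mask_ratio=0.75):
    """Split pixel tokens into visible and masked subsets.

    x: (B, N, C) where N = H*W pixel tokens.
    Returns the visible tokens, a boolean mask, and indices to restore the original order.
    """
    B, N, C = x.shape
    num_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                          # one random score per pixel token
    ids_shuffle = noise.argsort(dim=1)                # tokens with the lowest scores are kept
    ids_restore = ids_shuffle.argsort(dim=1)
    ids_keep = ids_shuffle[:, :num_keep]
    visible = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, C))
    mask = torch.ones(B, N, dtype=torch.bool)
    mask[torch.arange(B).unsqueeze(1), ids_keep] = False   # False = visible, True = masked
    return visible, mask, ids_restore

tokens = torch.randn(4, 28 * 28, 192)                 # pixel tokens after embedding
visible, mask, ids_restore = random_pixel_mask(tokens)
# The encoder sees only `visible`; a lightweight decoder reconstructs the masked pixels.
```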

Image Generation

In the domain of image generation, PiTs were tested on ImageNet in the latent space of a VQGAN tokenizer, following the DiT setup. Results were evaluated using FID, sFID, IS, and precision/recall. PiT-L proved compatible with this pipeline and even surpassed DiT-L in generation quality, achieving a lower FID of 4.05 versus 4.16 for DiT-L.
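
The sketch below contrasts the two tokenizations of a latent feature map: grouping latent positions into small patches (as in DiT) versus treating every latent position as its own token (as in PiT). The 32x32x4 latent shape, the patch size of 2, and the embedding width are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative comparison of patch-based vs. per-position tokenization of a latent feature map.
import torch
import torch.nn as nn

latent = torch.randn(2, 4, 32, 32)            # (B, C, h, w) latent from a VQGAN/VAE-style encoder
hidden = 256                                  # illustrative embedding width

# DiT-style tokenization: 2x2 latent patches -> (32 // 2) ** 2 = 256 tokens
patchify = nn.Conv2d(4, hidden, kernel_size=2, stride=2)
dit_tokens = patchify(latent).flatten(2).transpose(1, 2)       # (2, 256, hidden)

# PiT-style tokenization: every latent position is its own token -> 32 * 32 = 1024 tokens
pixelize = nn.Linear(4, hidden)
pit_tokens = pixelize(latent.flatten(2).transpose(1, 2))       # (2, 1024, hidden)

print(dit_tokens.shape, pit_tokens.shape)
```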

Analysis on Locality Designs

The paper additionally revisits the impact of locality designs in ViTs, such as position embeddings and patchification. It finds that learnable position embeddings have minimal effect on performance, whereas patchification introduces a stronger locality bias. Experiments that permute pixels within patches show a significant performance drop when locality is disrupted, indicating how strongly patch-based models rely on local spatial structure.
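
A rough re-implementation of such a within-patch permutation test is sketched below: pixels are shuffled inside each 16x16 patch, so patch contents are preserved while their local ordering is destroyed. Details such as sharing one permutation across all patches and images are simplifying assumptions made here, not necessarily the paper's exact protocol.

```python
# Sketch of a within-patch pixel shuffle used to probe locality (illustrative re-implementation).
import torch

def shuffle_pixels_within_patches(x, patch=16, generator=None):
    """Apply one fixed random permutation of pixel positions inside every patch.

    x: (B, C, H, W) with H and W divisible by `patch`.
    """
    B, C, H, W = x.shape
    # Split the image into patches and gather the patch*patch pixels of each patch on the last axis.
    x = x.reshape(B, C, H // patch, patch, W // patch, patch)
    x = x.permute(0, 1, 2, 4, 3, 5).reshape(B, C, H // patch, W // patch, patch * patch)
    perm = torch.randperm(patch * patch, generator=generator)   # same permutation for every patch
    x = x[..., perm]
    # Undo the reshaping to recover an image of the original size.
    x = x.reshape(B, C, H // patch, W // patch, patch, patch)
    x = x.permute(0, 1, 2, 4, 3, 5).reshape(B, C, H, W)
    return x

img = torch.randn(1, 3, 224, 224)
scrambled = shuffle_pixels_within_patches(img)       # patch contents preserved, local order destroyed
```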

Future Implications

The findings in this paper carry significant implications for future vision model design. While PiTs show that models can operate without the locality inductive bias, practical limitations such as computational cost remain. However, advances in efficient Self-Attention mechanisms could make the approach more scalable, which argues for further exploration.

Theoretically, this paper contributes to the understanding that inductive biases, long considered fundamental, can be re-evaluated and potentially omitted in modern AI architectures. Practically, it signifies that next-generation models might leverage fewer pre-set assumptions, thereby generalizing better across varied tasks and data modalities.

Conclusion

This investigation challenges foundational assumptions in vision architectures by not only questioning but also demonstrating that locality is not indispensable. The work encourages a shift towards more flexible, data-driven designs for neural architectures in computer vision, promoting models that learn spatial relationships purely from data rather than relying on predefined structure.

Authors (6)
  1. Duy-Kien Nguyen (8 papers)
  2. Mahmoud Assran (20 papers)
  3. Unnat Jain (25 papers)
  4. Martin R. Oswald (69 papers)
  5. Cees G. M. Snoek (134 papers)
  6. Xinlei Chen (106 papers)