MetaFormer Is Actually What You Need for Vision (2111.11418v3)

Published 22 Nov 2021 in cs.CV, cs.AI, and cs.LG

Abstract: Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in Transformers can be replaced by spatial MLPs and the resulting models still perform quite well. Based on this observation, we hypothesize that the general architecture of the Transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in Transformers with an embarrassingly simple spatial pooling operator to conduct only basic token mixing. Surprisingly, we observe that the derived model, termed PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned Vision Transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 50%/62% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from Transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent Transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design. Code is available at https://github.com/sail-sg/poolformer.

Authors (8)
  1. Weihao Yu (36 papers)
  2. Mi Luo (9 papers)
  3. Pan Zhou (220 papers)
  4. Chenyang Si (36 papers)
  5. Yichen Zhou (21 papers)
  6. Xinchao Wang (203 papers)
  7. Jiashi Feng (295 papers)
  8. Shuicheng Yan (275 papers)
Citations (739)

Summary

Analysis of MetaFormer and PoolFormer in Computer Vision

This paper, titled "MetaFormer Is Actually What You Need for Vision," investigates the underpinnings of the success seen in Transformer models, particularly focusing on their applications in the field of computer vision. The authors propose that the architectural framework termed "MetaFormer," rather than specific token mixers such as attention mechanisms, plays a pivotal role in achieving high performance across various vision tasks.

Overview

Transformers have gained substantial attention in computer vision, and their success is commonly attributed to the attention-based token mixer. The authors challenge this perspective, suggesting that the general architecture of the Transformer, abstracted in this paper as "MetaFormer", is the more crucial component for competitive performance.

The MetaFormer framework encompasses all Transformer components except the specific token mixer, leaving this element flexible. This abstraction allows the researchers to replace the typical attention module with a simple spatial pooling operation to evaluate its effectiveness.
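
A minimal PyTorch sketch of this abstraction is given below (class and argument names are illustrative, not the authors' reference implementation): the block fixes the normalization, residual connections, and channel MLP, while the token mixer is passed in as an arbitrary module.

```python
import torch.nn as nn

class MetaFormerBlock(nn.Module):
    """One MetaFormer block: everything is fixed except the token mixer."""
    def __init__(self, dim, token_mixer, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mixer = token_mixer            # attention, spatial MLP, pooling, ...
        self.norm2 = nn.LayerNorm(dim)
        hidden = int(dim * mlp_ratio)
        self.mlp = nn.Sequential(                 # channel MLP shared by all variants
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )

    def forward(self, x):                         # x: (batch, num_tokens, dim)
        x = x + self.token_mixer(self.norm1(x))   # sub-block 1: token mixing
        x = x + self.mlp(self.norm2(x))           # sub-block 2: channel MLP
        return x
```

Plugging in an attention module recovers a Transformer-style block, a spatial MLP recovers an MLP-like block, and even `nn.Identity()` yields a valid, if degenerate, MetaFormer instance.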

Key Contributions

  1. Introduction of MetaFormer: The paper presents MetaFormer as a generalized architecture that reframes how the Transformer's success is typically explained. By decoupling the specific token mixer from the rest of the block, MetaFormer broadens the understanding of what contributes to model efficacy.
  2. Development of PoolFormer: Using a simple pooling operation as the token mixer, the authors introduce PoolFormer (a sketch of this mixer follows this list). Despite its simplicity (the token mixer has no learnable parameters), PoolFormer outperforms well-established baselines such as DeiT and ResMLP, supporting the hypothesis that the MetaFormer architecture is the primary driver of performance.
  3. Outstanding Performance: On the ImageNet-1K benchmark, PoolFormer achieved a top-1 accuracy of 82.1%, surpassing DeiT-B and ResMLP-B24 while utilizing significantly fewer parameters (35% and 52% reductions, respectively), as well as fewer MACs (50% and 62% reductions, respectively).
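
Following the paper's description, the pooling token mixer is local average pooling with the input itself subtracted, since the surrounding MetaFormer block already carries a residual connection. A minimal sketch, operating on channels-first feature maps as PoolFormer does (implementation details such as padding handling are assumptions):

```python
import torch.nn as nn

class Pooling(nn.Module):
    """Parameter-free token mixer: local average pooling minus the input.
    Subtracting x compensates for the residual connection around the mixer."""
    def __init__(self, pool_size=3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1,
                                 padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x):                 # x: (batch, channels, height, width)
        return self.pool(x) - x
```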

Implications and Future Research

The results of this paper suggest that emphasis should shift from enhancing token mixers to refining the MetaFormer architecture itself. PoolFormer can serve as a strong baseline for future architecture design, prompting further exploration into lightweight token mixing strategies that maintain model efficiency.

This paper lays a foundation for future architectural work in computer vision and suggests that the MetaFormer abstraction may also prove useful in other domains, such as NLP. The authors encourage future work to leverage this architecture and refine it under different learning paradigms, such as self-supervised and transfer learning.

Technical Insights

The paper explores a variety of token mixers and normalizations, rigorously assessing their impacts through ablation studies. Importantly, even when using identity mappings or random matrices for token mixing, MetaFormer still attained reasonable performance. This supports the notion that the architectural framework, more than the mixer’s complexity, drives efficacy.
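
To make this ablation concrete: an identity token mixer is simply `nn.Identity()` plugged into the block sketched earlier, while a frozen random mixing matrix can be written as below (a hedged sketch; the initialization scale and exact setup are assumptions, not taken from the paper):

```python
import torch
import torch.nn as nn

class RandomTokenMixer(nn.Module):
    """Mix tokens with a random matrix that is frozen after initialization."""
    def __init__(self, num_tokens):
        super().__init__()
        mix = torch.randn(num_tokens, num_tokens) / num_tokens ** 0.5
        self.register_buffer("mix", mix)      # buffer, so it is never trained

    def forward(self, x):                     # x: (batch, num_tokens, dim)
        return torch.einsum("mn,bnd->bmd", self.mix, x)
```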

Additionally, the modified Layer Normalization, which computes mean and variance over both the token (spatial) and channel dimensions, was shown to improve top-1 accuracy over conventional Layer Normalization and Batch Normalization by 0.7% to 0.8%.
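
For channels-first feature maps, one way to realize such a normalization (a sketch under the assumption that statistics are pooled jointly over channels and spatial positions, not necessarily the authors' exact code) is group normalization with a single group:

```python
import torch
import torch.nn as nn

# Modified Layer Normalization sketch: statistics over channels *and* spatial
# positions. For (B, C, H, W) tensors, GroupNorm with one group does exactly this.
def modified_layer_norm(dim):
    return nn.GroupNorm(num_groups=1, num_channels=dim)

x = torch.randn(2, 64, 56, 56)      # (batch, channels, height, width)
y = modified_layer_norm(64)(x)      # normalized with per-channel affine parameters
```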

Conclusion

This paper offers a compelling examination of Transformers in computer vision. By abstracting the architecture into MetaFormer, the researchers demonstrate that even very simple token mixers can achieve strong results when embedded in this general architecture. This insight may steer future research toward optimizing architectural aspects rather than focusing solely on complex token mixers. As AI continues to evolve, the methodologies introduced here may influence a wide range of applications across different domains.
