Rethinking Patch Dependence for Masked Autoencoders (2401.14391v1)

Published 25 Jan 2024 in cs.CV

Abstract: In this work, we re-examine inter-patch dependencies in the decoding mechanism of masked autoencoders (MAE). We decompose this decoding mechanism for masked patch reconstruction in MAE into self-attention and cross-attention. Our investigations suggest that self-attention between mask patches is not essential for learning good representations. To this end, we propose a novel pretraining framework: Cross-Attention Masked Autoencoders (CrossMAE). CrossMAE's decoder leverages only cross-attention between masked and visible tokens, with no degradation in downstream performance. This design also enables decoding only a small subset of mask tokens, boosting efficiency. Furthermore, each decoder block can now leverage different encoder features, resulting in improved representation learning. CrossMAE matches MAE in performance with 2.5 to 3.7$\times$ less decoding compute. It also surpasses MAE on ImageNet classification and COCO instance segmentation under the same compute. Code and models: https://crossmae.github.io

Introduction

Masked Autoencoders (MAE) have gained prominence in unsupervised learning for computer vision, offering efficient pre-training of large-scale models. MAE traditionally employs multi-headed self-attention throughout its decoder, where both visible and masked tokens exchange information. However, recent empirical evidence raises questions about the necessity of this self-attention paradigm, specifically the role of self-attention among the masked patches. CrossMAE, introduced in the reviewed paper, critically analyzes and restructures the decoding mechanism of MAE, with a focus on efficiency and representation quality.

Analysis of Mask Tokens

The core inquiry starts with the attention mechanism applied in the decoder of MAE, distinguishing between self-attention among masked tokens and cross-attention in which masked tokens attend to visible ones. The authors' preliminary analysis of MAE's decoder attention maps indicates that masked tokens attend disproportionately to visible tokens rather than to other masked tokens, suggesting that self-attention among masked patches may contribute little to the quality of the learned representation. This observation raises two questions: is self-attention among mask tokens necessary for effective representation learning, and can decoders be designed to reconstruct only a partial set of masked patches without diminishing performance?
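
To make this diagnostic concrete, below is a minimal sketch (not the paper's released code) of how one might measure where mask-token queries place their attention in an MAE-style decoder. The tensor shapes and the `is_mask` indicator are illustrative assumptions.

```python
# Compare how much attention mass mask-token queries place on visible keys
# versus on other mask keys, given softmaxed decoder self-attention weights.
import torch

def mask_to_visible_attention_ratio(attn, is_mask):
    """attn: (batch, heads, tokens, tokens) softmaxed attention weights.
    is_mask: (tokens,) boolean, True where the token is a mask token."""
    mask_queries = attn[:, :, is_mask, :]             # rows for mask-token queries
    to_visible = mask_queries[..., ~is_mask].sum(-1)  # attention mass on visible keys
    to_mask = mask_queries[..., is_mask].sum(-1)      # attention mass on other mask keys
    return to_visible.mean().item(), to_mask.mean().item()

# Example with random weights, just to show the interface:
tokens, heads = 196, 8
attn = torch.softmax(torch.randn(2, heads, tokens, tokens), dim=-1)
is_mask = torch.rand(tokens) < 0.75                   # e.g. a 75% masking ratio
print(mask_to_visible_attention_ratio(attn, is_mask))
```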

CrossMAE Design

CrossMAE proposes a novel framework that uses only cross-attention for masked patch reconstruction, removing the need for masked tokens to attend to one another. This adjustment significantly reduces the computation spent in the decoder, with no observed decrease in downstream task performance. In addition to restricting attention, CrossMAE introduces partial reconstruction: because each masked token is decoded independently, only a subset of the masked patches needs to be reconstructed. Moreover, CrossMAE's decoder utilizes different features from different encoder blocks in each decoder block, in contrast to MAE's use of only the final encoder feature map. Together, these strategies yield substantial improvements in decoding efficiency and suggest an enhanced capacity for representation learning.
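
As a concrete illustration, here is a simplified sketch of a cross-attention-only decoder block with partial reconstruction. This is not the released CrossMAE implementation; the dimensions, the query construction, and which encoder features feed each block are assumptions made for the example.

```python
import torch
import torch.nn as nn

class CrossAttentionDecoderBlock(nn.Module):
    """Mask-token queries cross-attend to visible-token features only,
    so any subset of mask tokens can be decoded independently."""
    def __init__(self, dim, num_heads):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_mlp = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, mask_queries, enc_feats):
        # Queries: (B, n_decoded, D) mask tokens (plus positional embeddings).
        # Keys/values: (B, n_visible, D) encoder features; no self-attention among queries.
        kv = self.norm_kv(enc_feats)
        attn_out, _ = self.cross_attn(self.norm_q(mask_queries), kv, kv)
        x = mask_queries + attn_out
        return x + self.mlp(self.norm_mlp(x))

# Partial reconstruction: decode only a subset of the masked positions.
B, n_visible, n_masked, D = 2, 49, 147, 512
enc_feats = torch.randn(B, n_visible, D)                 # visible-token features from the encoder
decode_idx = torch.randperm(n_masked)[: n_masked // 4]   # reconstruct ~25% of masked patches
mask_queries = torch.randn(B, len(decode_idx), D)        # stands in for mask token + positions
block = CrossAttentionDecoderBlock(D, num_heads=8)
out = block(mask_queries, enc_feats)                     # (B, len(decode_idx), D)
```

In the paper's design, each such block can consume features from a different encoder layer; the sketch exposes this simply by letting the caller pass a different `enc_feats` tensor to each block.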

Empirical Validation and Efficiency Gains

The CrossMAE framework, with its revised architecture, undergoes rigorous empirical validation. It matches or surpasses MAE on standard performance metrics while reducing decoding computation by a factor of 2.5 to 3.7. On ImageNet classification and COCO instance segmentation, CrossMAE outperforms MAE under identical computational budgets. Additionally, models trained under the CrossMAE framework scale favorably, suggesting that the disentangled design leaves room for further scalability improvements in representation learning.
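
For intuition about where the savings come from, the rough count below compares attention query-key pairs in a full self-attention decoder with those in a cross-attention decoder that reconstructs only a quarter of the masked patches. This is an illustrative simplification, not the paper's compute accounting: it counts only attention pairs and ignores MLP and projection costs, so it does not reproduce the reported 2.5 to 3.7x figure.

```python
# Assumed values: 224x224 image, 16x16 patches -> 196 tokens, 75% masking,
# 25% of the masked patches decoded. Counts attention query-key pairs only.
n_total = 196
n_visible = 49                                 # 25% of patches stay visible
n_masked = n_total - n_visible                 # 147 masked patches

mae_pairs = n_total * n_total                  # MAE decoder: every token attends to every token
crossmae_pairs = (n_masked // 4) * n_visible   # CrossMAE: decoded mask queries attend to visible keys

print(f"MAE self-attention pairs:       {mae_pairs}")
print(f"CrossMAE cross-attention pairs: {crossmae_pairs}")
```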

Conclusion

The CrossMAE paper presents compelling evidence that the decoding process of masked autoencoders can be made substantially more computationally efficient without compromising the representational abilities of the model. The findings encourage revisiting the role of self-attention in vision pre-training. The improved design also lends itself to scaling to longer input sequences, thereby expanding the range of tasks and datasets that can be processed effectively. With CrossMAE, efficient pre-training of visual representations becomes not only desirable but readily achievable.

Authors (9)
  1. Letian Fu
  2. Long Lian
  3. Renhao Wang
  4. Baifeng Shi
  5. Xudong Wang
  6. Adam Yala
  7. Trevor Darrell
  8. Ken Goldberg
  9. Alexei A. Efros