RMT: Retentive Networks Meet Vision Transformers (2309.11523v5)
Abstract: Vision Transformer (ViT) has gained increasing attention in the computer vision community in recent years. However, the core component of ViT, Self-Attention, lacks explicit spatial priors and bears a quadratic computational complexity, constraining the applicability of ViT. To alleviate these issues, we draw inspiration from the recent Retentive Network (RetNet) in the field of NLP and propose RMT, a strong general-purpose vision backbone with explicit spatial prior. Specifically, we extend RetNet's temporal decay mechanism to the spatial domain and propose a spatial decay matrix based on the Manhattan distance, which introduces an explicit spatial prior into Self-Attention. Additionally, we propose an attention decomposition form that adapts well to the explicit spatial prior, reducing the computational cost of modeling global information without disrupting the spatial decay matrix. Based on the spatial decay matrix and the attention decomposition form, we can flexibly integrate the explicit spatial prior into the vision backbone with linear complexity. Extensive experiments demonstrate that RMT exhibits exceptional performance across various vision tasks. Specifically, without extra training data, RMT achieves 84.8% and 86.1% top-1 accuracy on ImageNet-1k with 27M parameters/4.5 GFLOPs and 96M parameters/18.2 GFLOPs, respectively. For downstream tasks, RMT achieves 54.5 box AP and 47.2 mask AP on COCO detection, and 52.8 mIoU on ADE20K semantic segmentation. Code is available at https://github.com/qhfan/RMT
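To make the spatial decay idea concrete, the following is a minimal, single-head sketch (not the authors' implementation; the helper names, the omitted multi-head/decomposed handling, and the gamma value are illustrative assumptions). It builds a decay matrix D[n, m] = gamma^(|x_n - x_m| + |y_n - y_m|) from the 2D token coordinates and multiplies it element-wise into the softmax attention map before aggregating values:

```python
import torch


def manhattan_decay(height, width, gamma):
    """Spatial decay matrix D[n, m] = gamma ** (|x_n - x_m| + |y_n - y_m|)
    over all height*width token positions (hypothetical helper name)."""
    ys, xs = torch.meshgrid(torch.arange(height), torch.arange(width), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()   # (N, 2)
    manhattan = (coords[:, None, :] - coords[None, :, :]).abs().sum(-1)  # (N, N)
    return gamma ** manhattan


def spatial_decay_attention(q, k, v, height, width, gamma=0.9):
    """Single-head sketch: softmax attention scores are re-weighted
    element-wise by the Manhattan-distance decay (the explicit spatial prior).
    q, k, v: (batch, height*width, dim)."""
    dim = q.shape[-1]
    decay = manhattan_decay(height, width, gamma).to(q.device)           # (N, N)
    scores = torch.softmax(q @ k.transpose(-2, -1) / dim ** 0.5, dim=-1)
    scores = scores * decay                                              # apply spatial prior
    return scores @ v


# Toy usage: a batch of 2 feature maps with 8x8 tokens and 32 channels.
q = k = v = torch.randn(2, 64, 32)
out = spatial_decay_attention(q, k, v, height=8, width=8)                # (2, 64, 32)
```

The decomposed form mentioned in the abstract would instead apply the corresponding one-dimensional decay separately along rows and columns, which is what reduces the cost of global modeling from quadratic to linear in the number of tokens.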
- Learned queries for efficient local attention. In CVPR, 2022.
- Cascade r-cnn: Delving into high quality object detection. In CVPR, 2018.
- RegionViT: Regional-to-Local Attention for Vision Transformers. In ICLR, 2022.
- MMDetection: OpenMMLab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019.
- Twins: Revisiting the design of spatial attention in vision transformers. In NeurIPS, 2021.
- Conditional positional encodings for vision transformers. In ICLR, 2023.
- MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark. https://github.com/open-mmlab/mmsegmentation, 2020.
- Randaugment: Practical automated data augmentation with a reduced search space. In CVPRW, 2020.
- Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
- Davit: Dual attention vision transformers. In ECCV, 2022.
- Cswin transformer: A general vision transformer backbone with cross-shaped windows. In CVPR, 2022.
- An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
- Rethinking local perception in lightweight vision transformer, 2023.
- Doubly-fused vit: Fuse information from vision transformer doubly with local representation. In ECCV, 2022.
- SG-Former: Self-guided transformer with evolving token reallocation. In ICCV, 2023.
- Cmt: Convolutional neural networks meet vision transformers. In CVPR, 2022a.
- Visual attention network. arXiv preprint arXiv:2202.09741, 2022b.
- Transformer in transformer. In NeurIPS, 2021.
- Neighborhood attention transformer. In CVPR, 2023.
- Global context vision transformers. In ICML, 2023.
- Deep residual learning for image recognition. In CVPR, 2016.
- Mask r-cnn. In ICCV, 2017.
- Conv2former: A simple transformer-style convnet for visual recognition. arXiv preprint arXiv:2211.11943, 2022.
- Deep networks with stochastic depth. In ECCV, 2016.
- Orthogonal transformer: An efficient vision transformer backbone with token orthogonalization. In NeurIPS, 2022.
- Scaling up visual and vision-language representation learning with noisy text supervision. In ICML, 2021.
- All tokens matter: Token labeling for training better vision transformers. In NeurIPS, 2021.
- Panoptic feature pyramid networks. In CVPR, 2019.
- Mpvit: Multi-path vision transformer for dense prediction. In CVPR, 2022.
- Uniformer: Unified transformer for efficient spatiotemporal representation learning, 2022a.
- Mvitv2: Improved multiscale vision transformers for classification and detection. In CVPR, 2022b.
- Focal loss for dense object detection. In ICCV, 2017.
- Microsoft coco: Common objects in context. In ECCV, 2014.
- Scale-aware modulation meet transformer. In ICCV, 2023.
- Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021.
- A convnet for the 2020s. In CVPR, 2022.
- Unified-io: A unified model for vision, language, and multi-modal tasks. In ICLR, 2023.
- Edgevits: Competing light-weight cnns on mobile devices with vision transformers. In ECCV, 2022a.
- Fast vision transformers with hilo attention. In NeurIPS, 2022b.
- Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992.
- Train short, test long: Attention with linear biases enables input length extrapolation. In ICLR, 2022.
- Learning transferable visual models from natural language supervision. In ICML, 2021.
- Hornet: Efficient high-order spatial interactions with recursive gated convolutions. In NeurIPS, 2022.
- Shunted self-attention via multi-scale token aggregation. In CVPR, 2022.
- Inception transformer. In NeurIPS, 2022.
- Retentive network: A successor to Transformer for large language models. arXiv preprint arXiv:2307.08621, 2023.
- Efficientnet: Rethinking model scaling for convolutional neural networks. In ICML, 2019.
- Quadtree attention for vision transformers. In ICLR, 2022.
- Training data-efficient image transformers & distillation through attention. In ICML, 2021a.
- Going deeper with image transformers. In ICCV, 2021b.
- Maxvit: Multi-axis vision transformer. In ECCV, 2022.
- Attention is all you need. In NeurIPS, 2017.
- Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In ICCV, 2021.
- Pvtv2: Improved baselines with pyramid vision transformer. Computational Visual Media, 8(3):1–10, 2022a.
- Crossformer: A versatile vision transformer hinging on cross-scale attention. In ICLR, 2022b.
- Internimage: Exploring large-scale vision foundation models with deformable convolutions. In CVPR, 2023.
- Cvt: Introducing convolutions to vision transformers. arXiv preprint arXiv:2103.15808, 2021.
- Vision transformer with deformable attention. In CVPR, 2022.
- Unified perceptual parsing for scene understanding. In ECCV, 2018.
- mplug-2: A modularized multi-modal foundation model across text, image and video. In ICML, 2023.
- Lite vision transformer with enhanced self-attention. In CVPR, 2022a.
- Moat: Alternating mobile convolution and attention brings strong vision models. In ICLR, 2023.
- Focal self-attention for local-global interactions in vision transformers. In NeurIPS, 2021.
- Focal modulation networks. In NeurIPS, 2022b.
- Scalablevit: Rethinking the context-oriented generalization of vision transformer. In ECCV, 2022c.
- Wave-vit: Unifying wavelet and transformers for visual representation learning. In ECCV, 2022.
- Dual vision transformer. TPAMI, 2023.
- Volo: Vision outlooker for visual recognition. TPAMI, 2022.
- Cutmix: Regularization strategy to train strong classifiers with localizable features. In ICCV, 2019.
- mixup: Beyond empirical risk minimization. In ICLR, 2018.
- Multi-scale vision longformer: A new vision transformer for high-resolution image encoding. In ICCV, 2021.
- Vsa: Learning varied-size window attention in vision transformers. In ECCV, 2022.
- Random erasing data augmentation. In AAAI, 2020.
- Scene parsing through ade20k dataset. In CVPR, 2017.
- Biformer: Vision transformer with bi-level routing attention. In CVPR, 2023.
Authors: Qihang Fan, Huaibo Huang, Mingrui Chen, Hongmin Liu, Ran He