RMT: Retentive Networks Meet Vision Transformers (2309.11523v5)

Published 20 Sep 2023 in cs.CV

Abstract: Vision Transformer (ViT) has gained increasing attention in the computer vision community in recent years. However, the core component of ViT, Self-Attention, lacks explicit spatial priors and bears a quadratic computational complexity, thereby constraining the applicability of ViT. To alleviate these issues, we draw inspiration from the recent Retentive Network (RetNet) in the field of NLP, and propose RMT, a strong vision backbone with explicit spatial prior for general purposes. Specifically, we extend the RetNet's temporal decay mechanism to the spatial domain, and propose a spatial decay matrix based on the Manhattan distance to introduce the explicit spatial prior to Self-Attention. Additionally, an attention decomposition form that adeptly adapts to explicit spatial prior is proposed, aiming to reduce the computational burden of modeling global information without disrupting the spatial decay matrix. Based on the spatial decay matrix and the attention decomposition form, we can flexibly integrate explicit spatial prior into the vision backbone with linear complexity. Extensive experiments demonstrate that RMT exhibits exceptional performance across various vision tasks. Specifically, without extra training data, RMT achieves 84.8% and 86.1% top-1 acc on ImageNet-1k with 27M/4.5GFLOPs and 96M/18.2GFLOPs. For downstream tasks, RMT achieves 54.5 box AP and 47.2 mask AP on the COCO detection task, and 52.8 mIoU on the ADE20K semantic segmentation task. Code is available at https://github.com/qhfan/RMT

Authors (5)
  1. Qihang Fan (13 papers)
  2. Huaibo Huang (58 papers)
  3. Mingrui Chen (15 papers)
  4. Hongmin Liu (8 papers)
  5. Ran He (172 papers)
Citations (42)

Summary

Exploring RMT: Integrating Retentive Networks and Vision Transformers

The paper "RMT: Retentive Networks Meet Vision Transformers" presents a novel approach to enhancing Vision Transformers (ViTs) by addressing some of their inherent limitations. The core proposition of the paper is the development of a vision backbone termed RMT, which draws inspiration from Retentive Networks (RetNet) typically used in NLP. The proposed RMT network introduces explicit spatial priors into the self-attention mechanism of ViTs while improving computational efficiency.

Motivation and Challenges

ViTs have emerged as a potent architecture in computer vision, but they face specific challenges: self-attention, their pivotal component, lacks inherent spatial priors, and its quadratic complexity imposes a high computational burden, especially at large input resolutions. Prior efforts to alleviate these issues have had some success, but many of them introduce complications of their own. The authors instead extend RetNet's temporal decay mechanism into a spatial decay for vision transformers, building an explicit spatial prior directly into attention.

Methodological Innovations

The authors extend RetNet's one-dimensional temporal decay to two dimensions by constructing a spatial decay matrix from the Manhattan distance between token positions, thereby introducing an explicit spatial prior into self-attention. This mechanism, termed Manhattan Self-Attention (MaSA), applies a bidirectional, two-dimensional decay that respects the spatial geometry of image data: the farther a key token lies from a query token on the feature map, the more its contribution is attenuated. Combined with the decomposition described next, MaSA lets RMT model global information while keeping computational cost linear and preserving the spatial prior.
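To make the mechanism concrete, below is a minimal PyTorch sketch of a Manhattan-distance decay matrix. This is an illustrative assumption, not the authors' implementation: the function name, the single decay rate `gamma`, and the way the matrix modulates attention scores are placeholders, and the paper's actual formulation may differ (for example, in how decay rates are assigned per attention head).

```python
import torch

def manhattan_decay_matrix(height: int, width: int, gamma: float = 0.9) -> torch.Tensor:
    """Return D of shape (H*W, H*W) with D[i, j] = gamma ** ManhattanDistance(i, j)."""
    ys, xs = torch.meshgrid(torch.arange(height), torch.arange(width), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (N, 2) token positions
    dist = torch.cdist(coords, coords, p=1)  # pairwise L1 (Manhattan) distances, shape (N, N)
    return gamma ** dist  # nearby token pairs keep weights near 1; distant pairs decay toward 0

# Hypothetical usage: modulate the attention map element-wise with the decay, e.g.
#   attn = softmax(q @ k.transpose(-2, -1) * scale, dim=-1) * manhattan_decay_matrix(H, W, gamma)
```

In this sketch the decay is a fixed function of token coordinates, so the spatial prior is injected without adding learnable parameters to the attention scores.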

A further contribution is the ability to decompose self-attention along the image's horizontal and vertical axes without disrupting the structure of the spatial decay matrix. This decomposition preserves the network's spatial prior while avoiding the cost of full global attention, which is what makes the linear complexity above attainable; a sketch follows.
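The sketch below illustrates one way such an axis-wise decomposition can look, again under stated assumptions rather than as the paper's implementation: attention with a one-dimensional decay is applied within each row (the width axis) and then within each column (the height axis), so the full (HW x HW) attention map is never materialized.

```python
import torch
import torch.nn.functional as F

def decay_1d(n: int, gamma: float = 0.9) -> torch.Tensor:
    """1D decay matrix: entry (i, j) equals gamma ** |i - j|."""
    idx = torch.arange(n).float()
    return gamma ** (idx[:, None] - idx[None, :]).abs()

def decomposed_attention(q, k, v, gamma: float = 0.9):
    """q, k, v: (B, H, W, C). Attend along W, then along H, each with a 1D decay."""
    B, H, W, C = q.shape
    scale = C ** -0.5
    # Row-wise attention: tokens interact only along the width axis.
    attn_w = F.softmax(q @ k.transpose(-2, -1) * scale, dim=-1) * decay_1d(W, gamma)
    out = attn_w @ v                                            # (B, H, W, C)
    # Column-wise attention applied to the row-attended features.
    qt, kt, vt = (t.transpose(1, 2) for t in (q, k, out))       # (B, W, H, C)
    attn_h = F.softmax(qt @ kt.transpose(-2, -1) * scale, dim=-1) * decay_1d(H, gamma)
    return (attn_h @ vt).transpose(1, 2)                        # back to (B, H, W, C)
```

Each token still aggregates information from the whole image through the row-then-column composition, but the cost of this sketch scales with H*W*(H+W) rather than (H*W)^2, which is the source of the efficiency gain.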

Experimental Results

RMT demonstrates strong performance on a wide range of vision tasks, including image classification on ImageNet-1k, object detection and instance segmentation on COCO 2017, and semantic segmentation on ADE20K. Without extra training data, RMT achieves 84.8% top-1 accuracy on ImageNet-1k with only 27M parameters and 4.5 GFLOPs, and 86.1% with 96M parameters and 18.2 GFLOPs, outperforming numerous state-of-the-art models within the same computational budget. On downstream tasks, RMT reaches 54.5 box AP and 47.2 mask AP on COCO detection and instance segmentation, and 52.8 mIoU on ADE20K semantic segmentation.

Theoretical and Practical Implications

The introduction of spatial decay explicitly tied to self-attention represents a considerable theoretical advancement in the domain of vision transformers. By enforcing spatial priors and streamlining computational processes, RMT potentially sets a new benchmark for efficient and effective deep learning in computer vision. Practically, RMT's architecture can lead to more computationally feasible deployment scenarios, particularly where efficiency and resource constraints are critical.

Future Directions

This research opens several avenues for further investigation. Future work could apply spatial decay to more complex or versatile models beyond the traditional vision paradigm, such as multi-modal learning frameworks. The spatial decay matrix and its hyperparameters could also be tuned for diverse datasets, potentially improving performance and adaptability across varied tasks.

In conclusion, this paper's proposal of RMT represents a significant step forward in the development of vision transformers. By integrating spatial priors and reducing complexity, it builds a robust foundation for future innovations in AI and machine learning applications.
