Vision Backbone Enhancement via Multi-Stage Cross-Scale Attention (2308.05872v2)

Published 10 Aug 2023 in cs.CV

Abstract: Convolutional neural networks (CNNs) and vision transformers (ViTs) have achieved remarkable success in various vision tasks. However, many architectures do not consider interactions between feature maps from different stages and scales, which may limit their performance. In this work, we propose a simple add-on attention module to overcome these limitations via multi-stage and cross-scale interactions. Specifically, the proposed Multi-Stage Cross-Scale Attention (MSCSA) module takes feature maps from different stages to enable multi-stage interactions and achieves cross-scale interactions by computing self-attention at different scales based on the multi-stage feature maps. Our experiments on several downstream tasks show that MSCSA provides a significant performance boost with modest additional FLOPs and runtime.
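The abstract does not spell out the module's exact design, but the core idea (tokens from multiple backbone stages jointly attending to one another) can be sketched as follows. Everything here is hypothetical illustration: the function name `mscsa_sketch`, the shared width `d_model`, and the random projections are assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mscsa_sketch(stage_maps, d_model=32, seed=0):
    """Toy multi-stage attention in the spirit of MSCSA.

    stage_maps: list of (C_i, H_i, W_i) arrays from different backbone stages.
    Tokens from all stages are projected to a shared width and attended
    jointly, so each token can mix information across stages and scales.
    """
    rng = np.random.default_rng(seed)
    tokens, splits = [], []
    for fmap in stage_maps:
        c, h, w = fmap.shape
        t = fmap.reshape(c, h * w).T                     # (N_i, C_i) tokens
        w_in = rng.standard_normal((c, d_model)) / np.sqrt(c)
        tokens.append(t @ w_in)                          # project to shared width
        splits.append(h * w)
    x = np.concatenate(tokens, axis=0)                   # (sum N_i, d_model)
    wq = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    wk = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    wv = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(d_model))           # joint multi-stage attention
    out = x + attn @ v                                   # residual "add-on" update
    idx = np.cumsum(splits)[:-1]
    return np.split(out, idx, axis=0)                    # per-stage token sets

# Two mock stages at different spatial scales (channels, height, width)
stages = [np.random.rand(16, 8, 8), np.random.rand(32, 4, 4)]
outs = mscsa_sketch(stages)
print([o.shape for o in outs])  # [(64, 32), (16, 32)]
```

The paper's module additionally computes self-attention at multiple scales of these features (the "cross-scale" part, e.g. via pooling tokens to coarser resolutions), which this sketch omits for brevity.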

Authors (7)
  1. Liang Shang (6 papers)
  2. Yanli Liu (21 papers)
  3. Zhengyang Lou (3 papers)
  4. Shuxue Quan (5 papers)
  5. Nagesh Adluru (8 papers)
  6. Bochen Guan (10 papers)
  7. William A. Sethares (7 papers)
Citations (1)
