
AxWin Transformer: A Context-Aware Vision Transformer Backbone with Axial Windows (2305.01280v1)

Published 2 May 2023 in cs.CV

Abstract: Recently, Transformers have shown good performance in several vision tasks due to their powerful modeling capabilities. To reduce the quadratic complexity caused by attention, some notable works restrict attention to local regions or extend axial interactions. However, these methods often lack interaction between local and global information and fail to balance coarse- and fine-grained information. To address this problem, we propose AxWin Attention, which models context information in both local windows and axial views. Based on AxWin Attention, we develop a context-aware vision transformer backbone, named AxWin Transformer, which outperforms state-of-the-art methods in classification as well as downstream segmentation and detection tasks.
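The abstract describes combining attention within local windows with attention along axial (row/column) strips. The paper's exact block design is not given here, so the following is only a conceptual sketch in NumPy, assuming a simple additive fusion of the two attention paths; all function names (`window_attention`, `axial_attention`, `axwin_block`) are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention over the second-to-last axis.
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def window_attention(x, win):
    # Attend within non-overlapping win x win local windows.
    # x: (H, W, C); assumes H and W are divisible by win.
    H, W, C = x.shape
    out = np.empty_like(x)
    for i in range(0, H, win):
        for j in range(0, W, win):
            patch = x[i:i + win, j:j + win].reshape(-1, C)
            out[i:i + win, j:j + win] = attention(patch, patch, patch).reshape(win, win, C)
    return out

def axial_attention(x):
    # Attend along rows, then along columns (axial strips), and sum.
    rows = attention(x, x, x)  # per-row attention over the W axis
    xt = x.swapaxes(0, 1)      # per-column attention via transpose
    cols = attention(xt, xt, xt).swapaxes(0, 1)
    return rows + cols

def axwin_block(x, win=2):
    # Hypothetical fusion: local-window context plus axial (global) context.
    return window_attention(x, win) + axial_attention(x)
```

The local-window path captures fine-grained neighborhood context at linear cost, while the axial path propagates coarse global context along full rows and columns; summing the two is one plausible way to mix them, though the paper may fuse them differently.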

Authors (5)
  1. Fangjian Lin (7 papers)
  2. Yizhe Ma (3 papers)
  3. Sitong Wu (20 papers)
  4. Long Yu (31 papers)
  5. Shengwei Tian (12 papers)
Citations (3)