Softmax-free Linear Transformers
Abstract: Vision transformers (ViTs) have pushed the state of the art for visual perception tasks. However, the self-attention mechanism underpinning the strength of ViTs has quadratic complexity in both computation and memory usage. This motivates efforts to approximate self-attention at linear complexity. An in-depth analysis in this work reveals that existing methods are either theoretically flawed or empirically ineffective for visual recognition. We identify that their limitations are rooted in retaining softmax-based self-attention during approximation, that is, normalizing the scaled dot-product between token feature vectors with the softmax function; preserving this softmax operation challenges any subsequent linearization effort. With this insight, a family of Softmax-Free Transformers (SOFT) is proposed. Specifically, a Gaussian kernel function is adopted to replace the dot-product similarity, enabling the full self-attention matrix to be approximated via low-rank matrix decomposition. For computational robustness, we estimate the Moore-Penrose inverse with an iterative Newton-Raphson method in the forward pass only, and compute its theoretical gradients once in the backward pass. To further expand applicability (e.g., to dense prediction tasks), an efficient symmetric normalization technique is introduced. Extensive experiments on ImageNet, COCO, and ADE20K show that SOFT significantly improves the computational efficiency of existing ViT variants. With linear complexity, SOFT can process much longer token sequences, yielding a superior trade-off between accuracy and complexity. Code and models are available at https://github.com/fudan-zvg/SOFT.
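The sketch below illustrates the core recipe described in the abstract: a Gaussian kernel in place of the softmax-normalized dot product, a Nyström-style low-rank approximation built from a small set of landmark tokens, and an iterative Newton-Raphson estimate of the Moore-Penrose inverse. It is a minimal, self-contained PyTorch example assuming single-head, unbatched tokens; the kernel bandwidth, landmark pooling scheme, iteration count, and function names are illustrative choices rather than the exact SOFT implementation (which also applies the symmetric normalization mentioned above).

```python
# Minimal sketch: softmax-free attention with a Gaussian kernel,
# a Nystrom-style low-rank approximation via m landmark tokens, and a
# Newton-Raphson iteration for the Moore-Penrose pseudoinverse.
# Shapes, bandwidth, and landmark sampling are illustrative assumptions.
import torch


def gaussian_kernel(x, y):
    """Pairwise exp(-||x_i - y_j||^2 / 2); x: (n, d), y: (m, d) -> (n, m)."""
    sq_dist = torch.cdist(x, y, p=2).pow(2)
    return torch.exp(-0.5 * sq_dist)


def newton_pinv(a, iters=20):
    """Newton-Raphson (Ben-Israel & Cohen) iteration for pinv of a (m, m) matrix."""
    # Standard initialization that guarantees convergence: A^T scaled by
    # the product of its max row sum and max column sum.
    v = a.transpose(-1, -2) / (a.abs().sum(dim=-1).max() * a.abs().sum(dim=-2).max())
    eye = torch.eye(a.shape[-1], device=a.device, dtype=a.dtype)
    for _ in range(iters):
        v = v @ (2 * eye - a @ v)  # V_{k+1} = V_k (2I - A V_k)
    return v


def soft_attention(tokens, num_landmarks=16):
    """Low-rank softmax-free attention: A ~= K_nm @ pinv(K_mm) @ K_mn."""
    n, d = tokens.shape
    # Illustrative landmark choice: average-pool consecutive token groups
    # into num_landmarks bottleneck tokens (requires n % num_landmarks == 0 here).
    landmarks = tokens.reshape(num_landmarks, n // num_landmarks, d).mean(dim=1)
    k_nm = gaussian_kernel(tokens, landmarks)      # (n, m)
    k_mm = gaussian_kernel(landmarks, landmarks)   # (m, m)
    attn = k_nm @ newton_pinv(k_mm) @ k_nm.T       # (n, n) low-rank approximation
    return attn @ tokens                           # aggregate token features


if __name__ == "__main__":
    x = torch.randn(64, 32)     # 64 tokens, 32-dim features (queries = keys here)
    out = soft_attention(x)
    print(out.shape)            # torch.Size([64, 32])
```

Note that the expensive (n, n) product is only materialized here for clarity; computing `k_nm @ (pinv(k_mm) @ (k_nm.T @ tokens))` instead keeps every intermediate at size (n, m) or (m, d), which is what gives the method its linear complexity in the sequence length.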