Refiner: Refining Self-attention for Vision Transformers (2106.03714v1)
Abstract: Vision Transformers (ViTs) have shown competitive accuracy in image classification tasks compared with CNNs. Yet, they generally require much more data for model pre-training. Most recent works are thus dedicated to designing more complex architectures or training methods to address the data-efficiency issue of ViTs. However, few of them explore improving the self-attention mechanism, a key factor distinguishing ViTs from CNNs. Different from existing works, we introduce a conceptually simple scheme, called refiner, to directly refine the self-attention maps of ViTs. Specifically, refiner explores attention expansion that projects the multi-head attention maps to a higher-dimensional space to promote their diversity. Further, refiner applies convolutions to augment local patterns of the attention maps, which we show is equivalent to a distributed local attention: features are aggregated locally with learnable kernels and then globally aggregated with self-attention. Extensive experiments demonstrate that refiner works surprisingly well. Significantly, it enables ViTs to achieve 86% top-1 classification accuracy on ImageNet with only 81M parameters.
- Daquan Zhou (47 papers)
- Yujun Shi (23 papers)
- Bingyi Kang (39 papers)
- Weihao Yu (36 papers)
- Zihang Jiang (28 papers)
- Yuan Li (392 papers)
- Xiaojie Jin (50 papers)
- Qibin Hou (81 papers)
- Jiashi Feng (295 papers)
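
The abstract describes two operations on the attention maps: an expansion that projects the multi-head attention maps into a higher-dimensional (more heads) space, and a convolution that augments their local patterns before they are applied to the values. The following PyTorch sketch illustrates that idea at a high level; it is not the authors' released implementation, and names such as `RefinedAttention`, the head counts, and the kernel size are illustrative assumptions.

```python
import torch
import torch.nn as nn


class RefinedAttention(nn.Module):
    """Sketch of refiner-style self-attention (hypothetical, not the official code).

    Two refinement steps on the attention maps:
      1) attention expansion: a 1x1 conv maps the head dimension to more heads,
      2) local refinement: a depthwise conv augments local patterns in each map.
    """

    def __init__(self, dim, num_heads=6, expanded_heads=12, kernel_size=3):
        super().__init__()
        self.num_heads = num_heads
        self.expanded_heads = expanded_heads
        head_dim = dim // num_heads
        self.scale = head_dim ** -0.5

        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        # Attention expansion across the head dimension (1x1 conv = linear map over heads).
        self.expand = nn.Conv2d(num_heads, expanded_heads, kernel_size=1)
        # Depthwise conv over the N x N attention maps to enhance local patterns.
        self.dwconv = nn.Conv2d(expanded_heads, expanded_heads,
                                kernel_size=kernel_size,
                                padding=kernel_size // 2,
                                groups=expanded_heads)
        # Values sized to match the expanded head count.
        self.v_proj = nn.Linear(dim, expanded_heads * head_dim, bias=False)
        self.proj = nn.Linear(expanded_heads * head_dim, dim)

    def forward(self, x):
        B, N, C = x.shape
        head_dim = C // self.num_heads

        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, head_dim)
        q, k, _ = qkv.permute(2, 0, 3, 1, 4)           # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # (B, heads, N, N)

        attn = self.expand(attn)                        # (B, expanded_heads, N, N)
        attn = attn.softmax(dim=-1)
        attn = self.dwconv(attn)                        # local refinement of attention maps

        v = self.v_proj(x).reshape(B, N, self.expanded_heads, head_dim)
        v = v.permute(0, 2, 1, 3)                       # (B, expanded_heads, N, head_dim)
        out = (attn @ v).transpose(1, 2).reshape(B, N, -1)
        return self.proj(out)
```

Applying the depthwise convolution after the softmax corresponds to the "distributed local attention" view in the abstract: each output token mixes value features that were first aggregated locally with learnable kernels and then aggregated globally by the attention weights. The exact ordering of expansion, softmax, and convolution in the paper may differ from this sketch.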