Linearized Relative Positional Encoding (2307.09270v1)
Abstract: Relative positional encoding is widely used in vanilla and linear transformers to represent positional information. However, existing encoding methods designed for the vanilla transformer are not always directly applicable to linear transformers, because the latter require the query and key representations to be decomposed into separate kernel functions. Principles for designing encoding methods suitable for linear transformers remain understudied. In this work, we unify a variety of existing linear relative positional encoding approaches under a canonical form and further propose a family of linear relative positional encoding algorithms based on unitary transformations. Our formulation leads to a principled framework for developing new relative positional encoding methods that preserve linear space-time complexity. Applied to different models, the proposed linearized relative positional encoding (LRPE) family yields effective encodings for various applications. Experiments show that, compared with existing methods, LRPE achieves state-of-the-art performance in language modeling, text classification, and image classification. More broadly, it offers a general paradigm for designing relative positional encoding methods applicable to linear transformers. The code is available at https://github.com/OpenNLPLab/Lrpe.
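To make the decomposition requirement concrete, below is a minimal NumPy sketch (not the paper's implementation; see the linked repository for that) of one member of the unitary-transformation family: a rotary-style encoding applied on top of kernelized linear attention. The kernel choice (elu+1), the frequency base, and the non-causal normalization are illustrative assumptions.

```python
import numpy as np

def elu_plus_one(x):
    # Positive kernel feature map commonly used in linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

def rotary(x, base=10000.0):
    # Position-dependent 2-D rotations: a unitary transform M_s such that
    # (M_s u)^T (M_t w) depends on positions only through t - s, which is
    # what makes the encoding *relative* while keeping q/k factorized.
    n, d = x.shape
    half = d // 2
    freqs = base ** (-np.arange(half) / half)
    angles = np.arange(n)[:, None] * freqs[None, :]   # (n, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def linear_attention_lrpe(q, k, v):
    # Kernelize first, then rotate: the score phi(q_s)^T M_{t-s} phi(k_t)
    # still factorizes into per-position terms, so the (k^T v) contraction
    # is computed once and cost stays linear in sequence length n.
    # Non-causal variant; the denominator uses the un-rotated features
    # so the normalizer stays positive.
    phi_q, phi_k = elu_plus_one(q), elu_plus_one(k)
    qf, kf = rotary(phi_q), rotary(phi_k)
    kv = kf.T @ v                      # (d, d_v), computed once
    z = phi_q @ phi_k.sum(axis=0)      # (n,) positive normalizer
    return (qf @ kv) / z[:, None]

# Tiny smoke test with hypothetical shapes.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 16)) for _ in range(3))
print(linear_attention_lrpe(q, k, v).shape)  # (8, 16)
```

The key design point the abstract describes is visible here: because the position-dependent transform is unitary and applied per token, the attention score never needs the full n-by-n matrix, preserving the linear space-time complexity of the underlying kernelized attention.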
- Zhen Qin
- Weixuan Sun
- Kaiyue Lu
- Hui Deng
- Dongxu Li
- Xiaodong Han
- Yuchao Dai
- Lingpeng Kong
- Yiran Zhong