From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers (2107.07999v8)
Abstract: In this paper we provide, to the best of our knowledge, the first comprehensive approach for incorporating various masking mechanisms into Transformer architectures in a scalable way. We show that recent results on linear causal attention (Choromanski et al., 2021) and log-linear RPE-attention (Luo et al., 2021) are special cases of this general mechanism. However, by casting the problem as a topological (graph-based) modulation of unmasked attention, we obtain several previously unknown results, including efficient d-dimensional RPE-masking and graph-kernel masking. We leverage many mathematical techniques, ranging from spectral analysis through dynamic programming and random walks to new algorithms for solving Markov processes on graphs. We provide a corresponding empirical evaluation.
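The abstract frames masked attention as a topological modulation of unmasked attention: the unnormalized attention kernel is reweighted elementwise by a mask encoding causality, relative positions, or a graph kernel over token positions. Below is a minimal brute-force NumPy sketch of that idea only; the function name and the simple distance-decay Toeplitz mask are hypothetical illustrations, not the paper's scalable algorithms, which avoid materializing the full L×L matrix.

```python
import numpy as np

def masked_attention(Q, K, V, M):
    """Brute-force attention with an elementwise mask modulating the kernel.

    A[i, j] is proportional to exp(q_i . k_j / sqrt(d)) * M[i, j], where M can
    encode a causal pattern, relative positions (RPE), or a graph kernel.
    """
    d = Q.shape[-1]
    scores = np.exp(Q @ K.T / np.sqrt(d)) * M      # mask-modulated unnormalized attention
    scores /= scores.sum(axis=-1, keepdims=True)   # row-normalize
    return scores @ V

# Hypothetical 1D RPE-style mask: weight decays with token distance |i - j|,
# giving a Toeplitz (stationary) modulation of the attention matrix.
L, d = 6, 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, L, d))
idx = np.arange(L)
M_rpe = np.exp(-0.5 * np.abs(idx[:, None] - idx[None, :]))
out = masked_attention(Q, K, V, M_rpe)
print(out.shape)  # (6, 4)
```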
- Krzysztof Choromanski (96 papers)
- Han Lin (53 papers)
- Haoxian Chen (15 papers)
- Tianyi Zhang (262 papers)
- Arijit Sehanobish (20 papers)
- Valerii Likhosherstov (25 papers)
- Jack Parker-Holder (47 papers)
- Tamas Sarlos (40 papers)
- Adrian Weller (150 papers)
- Thomas Weingarten (2 papers)