From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers (2107.07999v8)

Published 16 Jul 2021 in cs.LG and cs.AI

Abstract: In this paper we provide, to the best of our knowledge, the first comprehensive approach for incorporating various masking mechanisms into Transformer architectures in a scalable way. We show that recent results on linear causal attention (Choromanski et al., 2021) and log-linear RPE-attention (Luo et al., 2021) are special cases of this general mechanism. However, by casting the problem as a topological (graph-based) modulation of unmasked attention, we obtain several previously unknown results, including efficient d-dimensional RPE-masking and graph-kernel masking. We leverage mathematical techniques ranging from spectral analysis through dynamic programming and random walks to new algorithms for solving Markov processes on graphs. We provide a corresponding empirical evaluation.
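The common thread in the abstract is that a masked attention matrix M ∘ (Q'K'^T) can be applied to the values without ever materializing the L × L matrix, provided the mask M supports a fast matrix-vector product. As a concrete illustration (not the paper's code), below is a minimal NumPy sketch of the log-linear RPE case it cites (Luo et al., 2021): a Toeplitz mask M[i, j] = t_{i-j} is combined with linear (random-feature) attention via FFT-based circulant embedding. The function names and the random-feature inputs are illustrative assumptions, not an API from the paper.

```python
import numpy as np

def toeplitz_matvec(t_pos, t_neg, x):
    # Multiply the L x L Toeplitz matrix T, T[i, j] = t_{i-j}, by x in
    # O(L log L): embed T in a 2L x 2L circulant and apply the FFT.
    # t_pos: (L,)   entries t_0, ..., t_{L-1}      (first column of T)
    # t_neg: (L-1,) entries t_{-1}, ..., t_{-(L-1)} (rest of first row)
    L = x.shape[0]
    a = np.concatenate([t_pos, [0.0], t_neg[::-1]])  # circulant's first column
    y = np.concatenate([x, np.zeros(L)])
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(y)).real[:L]

def rpe_masked_linear_attention(q_f, k_f, v, t_pos, t_neg):
    # Compute (M * (q_f @ k_f.T)) @ v, where M[i, j] = t_{i-j} is a
    # Toeplitz RPE mask and q_f, k_f are (L, m) feature maps of the
    # queries/keys (e.g. Performer random features). The L x L matrix
    # is never formed: each (feature, value-dim) pair costs one Toeplitz
    # matvec, giving O(m * d * L log L) total instead of O(L^2 * d).
    L, m = q_f.shape
    d = v.shape[1]
    out = np.zeros((L, d))
    for r in range(m):
        for s in range(d):
            w = toeplitz_matvec(t_pos, t_neg, k_f[:, r] * v[:, s])
            out[:, s] += q_f[:, r] * w
    return out

# Self-check against the dense O(L^2) computation on toy inputs.
rng = np.random.default_rng(0)
L, m, d = 8, 4, 3
q_f, k_f = rng.normal(size=(L, m)), rng.normal(size=(L, m))
v = rng.normal(size=(L, d))
t_pos, t_neg = rng.normal(size=L), rng.normal(size=L - 1)
M = np.array([[t_pos[i - j] if i >= j else t_neg[j - i - 1]
               for j in range(L)] for i in range(L)])
dense = (M * (q_f @ k_f.T)) @ v
assert np.allclose(rpe_masked_linear_attention(q_f, k_f, v, t_pos, t_neg), dense)
```

The same decomposition is what makes the graph-based generalization in the paper attractive: whenever the mask admits a sub-quadratic matvec (Toeplitz masks via the FFT here, graph-kernel masks via sparse propagation in the paper), masked attention inherits that cost.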

Authors (10)
  1. Krzysztof Choromanski (96 papers)
  2. Han Lin (53 papers)
  3. Haoxian Chen (15 papers)
  4. Tianyi Zhang (262 papers)
  5. Arijit Sehanobish (20 papers)
  6. Valerii Likhosherstov (25 papers)
  7. Jack Parker-Holder (47 papers)
  8. Tamas Sarlos (40 papers)
  9. Adrian Weller (150 papers)
  10. Thomas Weingarten (2 papers)
Citations (28)