$O(n)$ Connections are Expressive Enough: Universal Approximability of Sparse Transformers (2006.04862v2)

Published 8 Jun 2020 in cs.LG and stat.ML

Abstract: Recently, Transformer networks have redefined the state of the art in many NLP tasks. However, these models suffer from quadratic computational cost in the input sequence length $n$ to compute pairwise attention in each layer. This has prompted recent research into sparse Transformers that sparsify the connections in the attention layers. While empirically promising for long sequences, fundamental questions remain unanswered: Can sparse Transformers approximate any arbitrary sequence-to-sequence function, similar to their dense counterparts? How do the sparsity pattern and the sparsity level affect their performance? In this paper, we address these questions and provide a unifying framework that captures existing sparse attention models. We propose sufficient conditions under which we prove that a sparse attention model can universally approximate any sequence-to-sequence function. Surprisingly, our results show that sparse Transformers with only $O(n)$ connections per attention layer can approximate the same function class as the dense model with $n^2$ connections. Lastly, we present experiments comparing different patterns/levels of sparsity on standard NLP tasks.
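To make the $O(n)$-connection idea concrete, below is a minimal sketch of one common sparsity pattern (a sliding local window plus a few global tokens), not the paper's specific construction or sufficient conditions. The function names `sparse_attention_mask` and `sparse_attention` and the parameters `window` and `num_global` are illustrative choices for this sketch.

```python
import numpy as np

def sparse_attention_mask(n: int, window: int = 3, num_global: int = 1) -> np.ndarray:
    """Boolean n x n mask with O(n) allowed connections: each query attends
    to a fixed-size local window plus a constant number of global tokens.
    (Illustrative pattern, not the paper's exact construction.)"""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        mask[i, lo:hi] = True          # local window: O(window) entries per query
    mask[:, :num_global] = True        # every query attends to the global tokens
    mask[:num_global, :] = True        # global tokens attend to all positions
    return mask

def sparse_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray,
                     mask: np.ndarray) -> np.ndarray:
    """Masked softmax attention: disallowed query-key pairs are set to -inf
    before the softmax, so they receive zero weight."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

With a fixed `window` and `num_global`, the number of allowed pairs grows roughly as $n(2\,\text{window}+1) + 2n\,\text{num\_global}$, i.e. $O(n)$, in contrast to the $n^2$ pairs of dense attention.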

Authors (6)
  1. Chulhee Yun (37 papers)
  2. Yin-Wen Chang (4 papers)
  3. Srinadh Bhojanapalli (44 papers)
  4. Ankit Singh Rawat (64 papers)
  5. Sashank J. Reddi (43 papers)
  6. Sanjiv Kumar (123 papers)
Citations (70)