
Randomized and Deterministic Attention Sparsification Algorithms for Over-parameterized Feature Dimension (2304.04397v1)

Published 10 Apr 2023 in cs.DS and cs.LG

Abstract: LLMs have shown their power in different areas. Attention computation, an important subroutine of LLMs, has also attracted interest in theory. Recently, the static computation and dynamic maintenance of the attention matrix were studied by [Alman and Song 2023] and [Brand, Song and Zhou 2023] from both the algorithmic and hardness perspectives. In this work, we consider sparsification of the attention problem. We make one simplifying assumption: the logit matrix is symmetric. Let $n$ denote the sentence length and $d$ the embedding dimension. Given a matrix $X \in \mathbb{R}^{n \times d}$, suppose $d \gg n$ and $\| X X^\top \|_{\infty} < r$ with $r \in (0,0.1)$. We then aim to find $Y \in \mathbb{R}^{n \times m}$ (where $m \ll d$) such that
\begin{align*} \| D(Y)^{-1} \exp( Y Y^\top ) - D(X)^{-1} \exp( X X^\top ) \|_{\infty} \leq O(r). \end{align*}
We provide two results for this problem.

$\bullet$ Our first result is a randomized algorithm. It runs in $\widetilde{O}(\mathrm{nnz}(X) + n^{\omega})$ time, succeeds with probability $1-\delta$, and chooses $m = O(n \log(n/\delta))$. Here $\mathrm{nnz}(X)$ denotes the number of non-zero entries of $X$, and $\omega$ denotes the exponent of matrix multiplication; currently $\omega \approx 2.373$.

$\bullet$ Our second result is a deterministic algorithm. It runs in $\widetilde{O}(\min\{\sum_{i\in[d]} \mathrm{nnz}(X_i)^2, \, dn^{\omega-1}\} + n^{\omega+1})$ time and chooses $m = O(n)$. Here $X_i$ denotes the $i$-th column of $X$.

Our main findings have the following implication for applied LLM tasks: any super-large feature dimension can be reduced to a size nearly linear in the sentence length.
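To make the problem statement concrete, the sketch below sets up the quantity being approximated: the row-normalized attention matrix $D(X)^{-1}\exp(XX^\top)$ for a matrix $X$ with $d \gg n$ and small logits, and a reduced $Y \in \mathbb{R}^{n \times m}$ with $m \ll d$. The abstract does not describe the paper's actual construction of $Y$, so as a stand-in this sketch uses a plain scaled Gaussian random projection $Y = XS$ (which preserves $XX^\top$ in expectation); the dimensions, scaling, and projection here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def attention_matrix(X):
    """D(X)^{-1} exp(X X^T): exponentiated symmetric logits, row-normalized."""
    A = np.exp(X @ X.T)
    return A / A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, d, m = 8, 4096, 256  # d >> n (over-parameterized features), m << d

# Draw X, then rescale so that ||X X^T||_inf = 0.05, i.e. r in (0, 0.1)
# as the abstract requires.
X = rng.normal(size=(n, d))
X *= np.sqrt(0.05 / np.abs(X @ X.T).max())

# Illustrative dimension reduction (NOT the paper's algorithm): a Gaussian
# sketch S with entries N(0, 1/m), so E[Y Y^T] = X X^T.
S = rng.normal(size=(d, m)) / np.sqrt(m)
Y = X @ S

# Entrywise (infinity-norm) error between the two attention matrices,
# which the paper bounds by O(r).
err = np.abs(attention_matrix(Y) - attention_matrix(X)).max()
print(f"entrywise error: {err:.4f}")
```

With small logits the attention entries are all close to $1/n$, so even this naive projection gives a small entrywise error; the paper's contribution is achieving the $O(r)$ guarantee with the stated running times and sketch sizes.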

Authors (3)
  1. Yichuan Deng (21 papers)
  2. Sridhar Mahadevan (33 papers)
  3. Zhao Song (253 papers)
Citations (29)