
LSH Attention for Scalable Transformers

Updated 19 December 2025
  • LSH Attention is a technique that employs randomized hashing to restrict attention to high-similarity query-key pairs, reducing dense O(N²) complexity to O(N log N) or better.
  • It approximates dot-product attention by grouping vectors into collision bins based on hash codes, enabling efficient sparse computations across various domains.
  • Empirical results show significant efficiency gains in NLP, vision, and geometric processing, though careful parameter tuning is essential to optimize performance.

Locality-Sensitive Hashing (LSH) Attention refers to a family of Transformer-compatible approximations that use randomized hash functions to identify and restrict attention to high-similarity pairs in the query-key space. Rather than computing the dense $O(N^2)$ full attention matrix, LSH-based methods employ structured probabilistic sparsification to achieve complexity reductions, often to $O(N \log N)$ or better, while maintaining competitive accuracy. This is accomplished through data-dependent grouping schemes where only pairs sharing hash codes or "colliding" bins participate in dot-product attention, underpinned by rigorous collision probability analysis. LSH attention is realized in multiple domains: language modeling (BERT-LSH (Li et al., 12 Apr 2024), SMYRF-BERT (Daras et al., 2020), MagicPIG (Chen et al., 21 Oct 2024)), high-resolution vision (BigGAN (Daras et al., 2020)), and geometric processing of point clouds (LAHNet (Qu et al., 30 Nov 2025)). The technique leverages the sparsity and locality of high-dimensional representations and often yields substantial efficiency gains and, in some cases, better generalization than dense baselines.

1. Mathematical Foundations of LSH-Based Attention

The LSH approach to approximating scaled dot-product attention centers on mapping each query $q_i$ and key $k_j$ through stochastic hash functions that are sensitive to their similarity:

$$A_{ij} = \frac{\exp\left(q_i^\top k_j / \sqrt{d}\right)}{\sum_{j'} \exp\left(q_i^\top k_{j'} / \sqrt{d}\right)}$$

LSH restricts the computation of $A_{ij}$ to those $(i, j)$ pairs that "collide" under $r$ random sign-projection hash bands (SimHash) (Li et al., 12 Apr 2024); a minimal hashing sketch follows the list below:

  • Sample $r$ independent Gaussian vectors $\mathbf{g}_1, \dots, \mathbf{g}_r \sim \mathcal{N}(0, I_d)$.
  • For $\mathbf{x} \in \mathbb{R}^d$, form the binary sign vector $\mathbf{s}(\mathbf{x}) = [\text{sign}(\mathbf{g}_1^\top \mathbf{x}), \dots, \text{sign}(\mathbf{g}_r^\top \mathbf{x})]$.
  • Apply folding and modular bucketization to map to $m$ buckets: $h(\mathbf{x}) = \left(\sum_{i=1}^r \mathbf{1}_{\{s_i>0\}} C_i \right) \bmod m$, where the $C_i$ are fixed integer coefficients.
  • Use $n$ independent hash functions to improve recall.
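
A minimal NumPy sketch of this bucketization (the power-of-two coefficients $C_i$ and the default values of $r$, $m$, and the table count are illustrative assumptions, not the exact settings of BERT-LSH):

```python
import numpy as np

def simhash_buckets(X, r=8, m=64, n_tables=4, seed=0):
    """Assign each row of X (shape (N, d)) to a bucket in each of n_tables hash tables.

    Each table draws r Gaussian projections, takes the sign pattern, and folds it
    into one of m buckets via fixed integer coefficients C_i (powers of two here,
    an illustrative choice).
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    C = 2 ** np.arange(r)                      # fixed coefficients C_1..C_r (assumed)
    buckets = np.empty((n_tables, N), dtype=np.int64)
    for t in range(n_tables):
        G = rng.standard_normal((d, r))        # r Gaussian projection vectors
        bits = (X @ G > 0).astype(np.int64)    # binary sign vector s(x), shape (N, r)
        buckets[t] = (bits @ C) % m            # fold and reduce modulo m
    return buckets
```

Because each table draws independent projections, a similar pair missed by one table can still collide in another, which is what the $n$ independent hash functions in the last step provide.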

The collision probability for a single random projection $\mathbf{g}$, with $\theta$ the angle between vectors $\mathbf{a}$ and $\mathbf{b}$, is

$$\Pr\left[\text{sign}(\mathbf{g}^\top \mathbf{a}) = \text{sign}(\mathbf{g}^\top \mathbf{b})\right] = 1 - \frac{\theta}{\pi}$$

and for a band of $r$ projections, $p_{\text{band}}(\mathbf{a}, \mathbf{b}) = \left(1 - \frac{\theta}{\pi}\right)^r$. Over $n$ independent hash tables, the total collision probability is $P_{\text{collide}} = 1 - (1 - p_{\text{band}})^n$ (Li et al., 12 Apr 2024).
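
For concreteness, these probabilities can be evaluated directly for a given pair of vectors (parameter values chosen purely for illustration):

```python
import numpy as np

def collision_probability(a, b, r=8, n_tables=4):
    """P_collide for SimHash with r projections per band and n independent tables."""
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    p_band = (1.0 - theta / np.pi) ** r        # single-band collision probability
    return 1.0 - (1.0 - p_band) ** n_tables    # probability at least one table collides

a = np.array([1.0, 0.2, 0.0])
b = np.array([1.0, 0.0, 0.1])                  # nearly parallel: high collision probability
print(collision_probability(a, b))
```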

Algorithmically, the LSH mask $C_{ij}$ flags active query-key pairs; the attention softmax is applied only to colliding pairs. This yields a sparse attention matrix, sharply reducing arithmetic operations.
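
A sketch of how the collision mask gates the softmax, reusing the hypothetical `simhash_buckets` helper above; dense masking is used here only for clarity, whereas a practical kernel would gather keys bucket by bucket:

```python
import numpy as np

def lsh_masked_attention(Q, K, V, r=8, m=64, n_tables=4, seed=0):
    """Attention restricted to query-key pairs sharing a hash bucket in any table.

    Passing the same seed for Q and K guarantees both sides are hashed with
    identical random projections.
    """
    d = Q.shape[-1]
    bq = simhash_buckets(Q, r=r, m=m, n_tables=n_tables, seed=seed)   # (n_tables, N_q)
    bk = simhash_buckets(K, r=r, m=m, n_tables=n_tables, seed=seed)   # (n_tables, N_k)
    collide = (bq[:, :, None] == bk[:, None, :]).any(axis=0)          # mask C_ij

    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(collide, scores, -1e9)             # suppress non-colliding pairs
    scores -= scores.max(axis=-1, keepdims=True)         # numerical stability
    weights = np.exp(scores) * collide                   # exact zeros off the mask
    denom = np.maximum(weights.sum(axis=-1, keepdims=True), 1e-12)
    return (weights / denom) @ V                         # sparse row-wise softmax times V
```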

Asymmetric LSH transformations (ALSH), central to SMYRF, embed queries and keys in $\mathbb{R}^{d+2}$ so that maximum inner-product search (MIPS) can be recast as Euclidean nearest-neighbor search, facilitating efficient hashing and balanced clustering (Daras et al., 2020).
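
One standard way to build such an embedding into $\mathbb{R}^{d+2}$ pads each side with complementary norm-correction coordinates so that Euclidean distance becomes a monotone function of the inner product; the sketch below illustrates this general construction and is not claimed to be SMYRF's exact transform:

```python
import numpy as np

def asymmetric_embed(Q, K):
    """Map queries and keys to R^{d+2} so Euclidean distance is monotone in q.k.

    F(q) = [q, 0, sqrt(M_Q^2 - ||q||^2)],  G(k) = [k, sqrt(M_K^2 - ||k||^2), 0],
    with M_Q, M_K the maximum query/key norms, giving
    ||F(q) - G(k)||^2 = M_Q^2 + M_K^2 - 2 q.k.
    """
    MQ = np.linalg.norm(Q, axis=1).max()
    MK = np.linalg.norm(K, axis=1).max()
    zq = np.zeros((Q.shape[0], 1))
    zk = np.zeros((K.shape[0], 1))
    pad_q = np.sqrt(np.maximum(MQ**2 - (Q**2).sum(1, keepdims=True), 0.0))
    pad_k = np.sqrt(np.maximum(MK**2 - (K**2).sum(1, keepdims=True), 0.0))
    Fq = np.concatenate([Q, zq, pad_q], axis=1)   # queries get the zero in slot d+1
    Gk = np.concatenate([K, pad_k, zk], axis=1)   # keys get the zero in slot d+2
    return Fq, Gk
```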

2. Algorithmic Structures and Practical Implementations

A generic LSH-Attention workflow consists of:

  1. Stacking $Q$, $K$ (and sometimes $V$) into unified arrays for efficient bucketization.
  2. Computing multiple independent hash assignments for each vector.
  3. Determining collision sets, forming a sparse attention mask.
  4. Computing only the necessary dot-products, normalizing via sparse softmax, and outputting the weighted sum.
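
As a usage illustration, this workflow collapses to a single call of the hypothetical sketch from Section 1 on stacked arrays (shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 512, 64
Q, K, V = rng.standard_normal((3, N, d))       # stacked query/key/value arrays
out = lsh_masked_attention(Q, K, V, r=8, m=64, n_tables=4)
print(out.shape)                               # (512, 64)
```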

In BERT-LSH, this results in the following steps per batch and head (Li et al., 12 Apr 2024):

  • Hash $Q$ and $K$ via several bands
  • Build the collision mask $C$
  • Populate the sparse score matrix $S$; apply row-wise softmax
  • Compute the sparse output $A\,V$ as usual

SMYRF extends the framework by:

  • Embedding each vector asymmetrically, so inner products are faithfully converted to $\ell_2$ proximities.
  • Hashing these embeddings and adaptively partitioning them into $L$ clusters of equal size.
  • Masking cross-cluster query-key pairs, enforcing balanced cluster-wise attention that is fully block-dense within clusters but globally sparse (Daras et al., 2020); a minimal clustering sketch follows this list.
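
A minimal sketch of balanced clustering by sorted hash values, reusing the hypothetical `asymmetric_embed` helper above; sorting on a single shared projection is a simplification of SMYRF's adaptive scheme:

```python
import numpy as np

def balanced_clusters(Fq, Gk, L, seed=0):
    """Sort queries and keys by a shared random projection of their asymmetric
    embeddings, then chunk the sorted order into L roughly equal-size clusters."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal(Fq.shape[1])        # one shared projection direction
    q_clusters = np.array_split(np.argsort(Fq @ g), L)
    k_clusters = np.array_split(np.argsort(Gk @ g), L)
    return q_clusters, k_clusters

def clustered_attention(Q, K, V, L=8):
    """Block-dense attention inside each cluster; cross-cluster pairs are masked out."""
    d = Q.shape[-1]
    Fq, Gk = asymmetric_embed(Q, K)             # from the ALSH sketch above
    out = np.zeros((Q.shape[0], V.shape[1]))
    for qi, ki in zip(*balanced_clusters(Fq, Gk, L)):
        s = Q[qi] @ K[ki].T / np.sqrt(d)
        s -= s.max(axis=-1, keepdims=True)
        w = np.exp(s)
        out[qi] = (w / w.sum(axis=-1, keepdims=True)) @ V[ki]
    return out
```

Because every cluster has the same size, the per-cluster blocks can be batched into dense matrix multiplications, which is what makes the balanced scheme hardware-friendly.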

MagicPIG introduces importance sampling for decoder-only LLMs, with proposal distributions developed from LSH collision theory and sampling implemented via hash-table lookup; the final output employs softmax reweighting with bias correction (Chen et al., 21 Oct 2024).
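
The core estimator can be sketched in a few lines; here the retrieved key set and its per-key retrieval probabilities are taken as given (in MagicPIG they come from LSH-table lookups), so this illustrates only the bias-corrected reweighting, not the full system:

```python
import numpy as np

def sampled_attention_estimate(q, K, V, sample_idx, sample_prob):
    """Self-normalized importance-sampling estimate of one query's attention output.

    sample_idx:  indices of keys retrieved (e.g. via hash-table lookups)
    sample_prob: each retrieved key's retrieval probability; dividing by it corrects
                 the bias toward keys the hashing scheme favors.
    """
    d = q.shape[-1]
    s = K[sample_idx] @ q / np.sqrt(d)
    s -= s.max()                               # numerical stability
    w = np.exp(s) / sample_prob                # reweight by 1 / proposal probability
    return (w / w.sum()) @ V[sample_idx]       # self-normalized (bias-corrected) output
```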

LAHNet applies LSH attention to point cloud registration, partitioning $N$ points in $\mathbb{R}^3$ into windows using randomized rotations and majority voting, enabling convolution-like local self-attention, efficient cross-window hopping, and overlap-aware matching via window-level pooling (Qu et al., 30 Nov 2025).

3. Computational Complexity and Approximation Guarantees

Baseline attention mechanisms incur $O(N^2 d)$ complexity per layer, with $N$ the sequence or token count and $d$ the embedding dimension. LSH-based attention achieves the following theoretical complexity reductions:

| Method | Complexity | Memory cost |
| --- | --- | --- |
| Dense | $O(N^2 d)$ | $O(N^2)$ |
| LSH | $O(N d \log N)$ | $O(N \log N)$ |
| Clustered | $O(N \log N + N)$ | $O(N)$ |
| LAHNet | $O(N)$ (with parameters $M$, $c$) | $O(N M)$ |

The hashing cost per band is $O(N d)$; intra-bucket attention, assuming an $O(\log N)$ average bucket occupancy, is $O(N d \log N)$ (Li et al., 12 Apr 2024). The approximation is lossy: only query-key pairs sharing hash buckets contribute to attention, with the $\ell_1$ error for each query's attention row bounded as

$$\left\|p_{i\cdot} - p'_{i\cdot}\right\|_1 \leq 1 - \sum_{j:\,C_{ij}=1} p_{ij}$$
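
A small numeric check of this bound for one attention row, under the convention that dropped (non-colliding) entries are zeroed without renormalization, a reading under which the bound holds with equality:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.standard_normal(16)
mask = rng.random(16) < 0.5                    # C_ij: which keys collide with this query

p = np.exp(scores - scores.max())
p /= p.sum()                                   # full attention row p_i.

p_masked = p * mask                            # drop non-colliding keys (no renormalization)

l1_error = np.abs(p - p_masked).sum()
bound = 1.0 - p[mask].sum()                    # 1 - retained probability mass
print(l1_error, bound)                         # equal in this reading
assert l1_error <= bound + 1e-12
```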

Proper tuning of hash depth ($r$, $n$, $m$), cluster count ($L$), and sampling parameters controls the balance between approximation error and efficiency.

Importance sampling in MagicPIG provides unbiased estimators with variance that declines with sample size; LSH-based proposals concentrate the sampling on high-similarity keys using collision probabilities proportional to cosine similarity (Chen et al., 21 Oct 2024).

4. Empirical Results and Observed Performance

Empirical comparisons consistently show substantial computational savings with competitive or superior accuracy. In BERT-LSH, FLOPs per forward pass drop by roughly 60%, and the average number of attention dot products decreases from 200 to 28.5 (Li et al., 12 Apr 2024). Pretraining and fine-tuning metrics show improved evaluation loss and downstream scores:

| Task | Baseline BERT | BERT-LSH |
| --- | --- | --- |
| Eval loss (pretrain) | 5.4955 | 5.3308 |
| Eval accuracy (%) | 15.05 | 18.19 |
| SST-2 loss | 0.5009 | 0.4948 |
| SQuAD 2.0 loss | 0.5364 | 0.5312 |
| Total training time | faster | slower (Python LSH overhead) |

Similar trends hold for SMYRF: SMYRF-BERT matches or exceeds baseline GLUE scores at half the attention memory (GLUE avg. 83.12 vs. 82.69 with a 50% memory reduction) and supports inference speedups of up to roughly 50% on longer sequences (Daras et al., 2020). SMYRF-trained GANs scale self-attention to 65k tokens on a single TPU (CelebA-HQ).

MagicPIG achieves up to $5\times$ throughput improvement for LLM generation with 96k-token contexts, degrades accuracy by at most 2% while computing only 2–5% of full-attention FLOPs, and supports scalable heterogeneous GPU+CPU inference pipelines (Chen et al., 21 Oct 2024).

LAHNet realizes $O(N)$ complexity in point cloud processing, preserving meaningful long-range coupling while retaining robust registration accuracy even on large-scale outdoor benchmarks (Qu et al., 30 Nov 2025).

5. Domain-Specific Extensions and Applications

LSH-attention has been deployed in multiple contexts:

  • NLP: BERT-LSH and SMYRF-BERT provide drop-in replacements for standard attention with no retraining required; results generalize to sequence classification, QA, and pretraining tasks (Li et al., 12 Apr 2024, Daras et al., 2020).
  • LLM Generation: MagicPIG addresses KV-cache bottlenecks in decoder-only models, enabling long context windows and batch scalability (Chen et al., 21 Oct 2024).
  • Vision: SMYRF enables scalable attention layers in BigGAN for high-res generation; LAHNet applies LSH-group partitioning for efficient local and cross-window feature interaction in point cloud registration (Daras et al., 2020, Qu et al., 30 Nov 2025).

A plausible implication is that locality-sensitive sparsification generalizes well to domains where strong pairwise locality or correlation in representation space is prevalent.

6. Implementation Considerations and Limitations

Current LSH-attention implementations face several practical issues:

  • GPU-level parallelism is typically underexploited due to Python/CPU-bound hash computations, though custom kernels are being developed (Li et al., 12 Apr 2024).
  • Approximation quality depends sensitively on hash depths, number of bands/tables, and cluster counts; poor tuning leads to false negatives (missed Q-K links) and degraded accuracy (Daras et al., 2020).
  • Early training stages in deep models and GANs can suffer from insufficient attention connectivity; dynamic adjustment of hash parameters or hybrid sparsity is needed (Daras et al., 2020).
  • LSH randomization requires controlled seeding for reproducibility.
  • Structured patterns (sliding window, polynomial kernels) may outperform LSH in domains with deterministic locality.

Integration into frameworks (PyTorch, TensorFlow, HuggingFace) requires custom attention kernels encompassing hashing, sorting, masking, and blockwise matmul (Daras et al., 2020, Chen et al., 21 Oct 2024).

7. Research Outlook and Potential Extensions

Future developments in LSH-based attention are likely to focus on the issues raised above: hardware-efficient hash kernels, dynamic adjustment of hash parameters during training, and hybrid sparse-dense schemes. A plausible implication is that LSH attention offers a flexible, unifying mechanism for scalable computation across dense- and sparse-Transformer variants, with demonstrated empirical benefits for both efficiency and generalization.
