
Approximate Nearest Neighbor Attention

Updated 14 September 2025
  • ANNA is an efficient attention mechanism that leverages approximate nearest neighbor search to restrict attention computation, reducing the quadratic overhead in Transformers.
  • It employs locality-sensitive hashing and candidate key retrieval to achieve sub-quadratic runtime while preserving the model’s expressive power and multi-hop reasoning capabilities.
  • ANNA’s design aligns with Massively Parallel Computation paradigms, enabling scalable, distributed attention processing that supports advanced sequence modeling tasks.

Approximate Nearest Neighbor Attention (ANNA) refers to an efficient attention mechanism that leverages approximate nearest neighbor (ANN) retrieval within neural architectures, particularly Transformers, to mitigate the quadratic computation and memory demands of standard full attention. By restricting each query’s attention computation to its approximate nearest neighbors—determined via algorithms such as locality-sensitive hashing (LSH)—ANNA achieves sub-quadratic time complexity while retaining the expressive power necessary for complex sequence modeling and reasoning. The formal construction, as given in recent work, demonstrates that ANNA-transformers match the representational capabilities of standard transformers, can solve advanced multi-hop reasoning tasks, and unify a broad class of efficient attention mechanisms under a Massively Parallel Computation (MPC) framework (Liu et al., 10 Sep 2025).

1. Formal Definition and Mechanism

ANNA modifies the traditional attention operation by limiting the set of keys each query attends to, selecting only those within its approximate nearest neighbor set in embedding space. For a sequence of N tokens, with embeddings X ∈ ℝ^{N×d}, and functions Q, K mapping tokens to m-dimensional query and key spaces, ANNA computes:

$$\text{ANNA}_{Q,K,V}(X)_i = \sum_{j=1}^{N} w_{ij}\, v_j$$

where the v_j are the value vectors and the weights w_{ij} ≥ 0 sum to 1 for each i, but are nonzero only for keys k_j “near” q_i (a minimal sketch follows the list below):

  • If w_{ij} > 0, then k_j ∈ 𝒩(q_i, c·r) = { k ∈ {k_1, …, k_N} : ‖q_i − k‖ ≤ c·r }
  • If k_j lies within distance r of q_i (an exact near neighbor), then w_{ij} is lower-bounded by a positive constant inversely proportional to the size of the retrieved candidate set
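
To make the definition concrete, here is a minimal NumPy sketch of attention restricted to per-query candidate sets. It is illustrative rather than the paper’s implementation: the candidate index lists are assumed to come from an external ANN routine (such as the LSH scheme in Section 4), and a softmax over candidates is just one simple way to produce weights satisfying the two conditions above.

```python
import numpy as np

def anna_attention(Q, K, V, candidates):
    """Attention in which query i attends only to the key indices in candidates[i].

    Q: (N, m) queries, K: (N, m) keys, V: (N, d_v) values.
    candidates[i]: array of approximate-nearest-neighbor key indices for Q[i],
    produced by an ANN structure such as LSH (see Section 4).
    """
    N, d_v = Q.shape[0], V.shape[1]
    out = np.zeros((N, d_v))
    for i, cand in enumerate(candidates):
        if len(cand) == 0:
            continue                           # no neighbor retrieved: leave a zero row (a design choice)
        scores = Q[i] @ K[cand].T              # similarity scores only over the candidate set
        w = np.exp(scores - scores.max())      # numerically stable softmax over candidates
        w /= w.sum()                           # weights are nonnegative and sum to 1 for each i
        out[i] = w @ V[cand]                   # weighted combination of candidate values
    return out
```

Because each candidate list has size on the order of N^{3ρ} log N rather than N, the total work stays sub-quadratic.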

In practical implementations, LSH-based techniques construct hash tables so that each query efficiently retrieves candidate keys from shared buckets. Algorithmic guarantees ensure that the probability of failing to retrieve a true neighbor is small, η = O(1/N^{1−3ρ}), and that the runtime is:

$$O\!\left( m\, N^{1+3\rho} \log N \right)$$

where ρ depends on the hash family and approximation factor, typically < 1/3.

2. Computational Complexity and Parallelism

Standard softmax attention incurs O(N²) complexity per sequence because every query–key pair is computed. ANNA achieves sub-quadratic complexity by using randomized hashing or approximate-neighbor data structures so that only O(N^{1+3ρ} log N) pairs are considered for attention. This reduction is not merely heuristic: theoretical analyses show that the resulting transformers can simulate MPC algorithms with only constant-depth overhead and near-optimal parallel runtime.
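
As a rough illustration with an arbitrary choice of ρ = 1/4 (not a value fixed by the construction), doubling the sequence length scales the dominant cost by

$$\frac{(2N)^{1+3\rho}\log(2N)}{N^{1+3\rho}\log N} \;\approx\; 2^{1+3\rho} = 2^{1.75} \approx 3.4$$

for large N, versus the 4× increase incurred by full O(N²) attention; the gap widens as ρ decreases.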

This property is significant: it implies that ANNA-transformers retain the “parallelism” and circuit depth capabilities required to solve a wide range of tasks, including those that require distributed computation over the sequence.

3. Expressive Power and Reasoning Tasks

The representational power of ANNA-transformers matches that of standard transformers and other efficient attention variants. Two concrete reasoning tasks analyzed include:

  • Match2 Task: For every token, decide whether another token matches it under a specified relation (e.g., the two token values sum to zero modulo a fixed integer). ANNA, and even Exact Match Attention (EMA), computes this exactly at depth 1 with minimal embedding dimension, leveraging the fact that matching tokens appear as nearest neighbors (a toy illustration follows this list).
  • k-Hop (Induction Heads) Task: Sequential multi-step dependencies in which the model must “follow” chains of token relations (e.g., recall the token reached after following the relation k times). ANNA solutions achieve depth O(log k), matching the Ω(log k) lower bound known for standard transformer architectures, and are therefore near-optimal.
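
As a toy illustration of the Match2 observation (hypothetical code; it uses the common formulation in which two values match when they sum to zero modulo p), the sketch below embeds values on the unit circle so that matching tokens become exact nearest neighbors, which is why a single round of nearest-neighbor attention suffices:

```python
import numpy as np

def match2_via_nearest_neighbor(xs, p):
    """For each value x_i, decide whether some other x_j satisfies (x_i + x_j) % p == 0,
    by checking whether the nearest key to query i lies at distance ~0."""
    xs = np.asarray(xs)
    key_angles   = 2 * np.pi * (xs % p) / p      # key for x_j sits at angle 2*pi*x_j/p
    query_angles = 2 * np.pi * ((-xs) % p) / p   # query for x_i sits where a matching value would
    K = np.stack([np.cos(key_angles),   np.sin(key_angles)],   axis=1)
    Q = np.stack([np.cos(query_angles), np.sin(query_angles)], axis=1)
    answers = []
    for i in range(len(xs)):
        d = np.linalg.norm(K - Q[i], axis=1)
        d[i] = np.inf                             # "another token": a token may not match itself
        answers.append(bool(d.min() < 1e-9))      # a match exists  <=>  the nearest key is at distance 0
    return answers

print(match2_via_nearest_neighbor([1, 4, 2, 7], p=5))   # -> [True, True, False, False]
```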

Notably, ANNA can simulate low-rank attention and reproduce the results of constant-depth low-rank transformers by suitable approximate neighbor selection, thus unifying these mechanisms.

4. Implementation Strategies: ANN Search in Attention

The practical implementation relies on fast ANN search methods:

  • Locality-Sensitive Hashing (LSH): Hashing keys and queries into buckets so that collisions are likely for “close” vectors. Multiple hash tables (ℓ) and hash functions (z per table) reduce failure rates. At inference, each query examines only the keys in its colliding buckets for attention score computation.
  • Algorithmic Details: The randomized attention operation is implemented in parallel, traversing multiple hash tables independently. The attention weights are normalized over retrieved candidates, and mechanism parameters (c, r, ℓ) are set according to the desired approximation and runtime constraints. The construction guarantees that every exact near neighbor of a query (within distance r) receives sufficient attention weight; a retrieval sketch appears below.

These steps allow scaling attention mechanisms for long sequences and large batch sizes.
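
As a concrete, purely illustrative sketch of the retrieval step, the following uses random-hyperplane (SimHash) signatures with several tables and several bits per table; the specific hash family used in the paper’s analysis may differ, so treat this as a simplification of the same idea:

```python
import numpy as np

def build_lsh_tables(K, n_tables=4, n_bits=8, seed=0):
    """Hash every key into one bucket per table via random-hyperplane signatures."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_tables, n_bits, K.shape[1]))   # z hash functions for each of the ell tables
    tables = []
    for t in range(n_tables):
        signatures = (K @ planes[t].T > 0)                     # (N, n_bits) boolean signature per key
        buckets = {}
        for j, bits in enumerate(signatures):
            buckets.setdefault(bits.tobytes(), []).append(j)   # keys sharing a signature share a bucket
        tables.append(buckets)
    return planes, tables

def lsh_candidates(Q, planes, tables):
    """For each query, take the union of the keys found in its colliding bucket of every table."""
    cands = []
    for q in Q:
        found = set()
        for t, buckets in enumerate(tables):
            bits = (planes[t] @ q > 0)
            found.update(buckets.get(bits.tobytes(), []))
        cands.append(np.array(sorted(found), dtype=int))
    return cands
```

The retrieved index lists can be passed as the candidate sets of the restricted-attention sketch in Section 1; failure probability and candidate-set size are then controlled by the number of tables and bits per table, mirroring the (c, r, ℓ) tuning described above.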

5. Comparison with Other Efficient Attention Mechanisms

ANNA contrasts with several other efficient attention strategies:

| Mechanism | Search Criterion | Runtime Complexity | Expressive Power |
| --- | --- | --- | --- |
| Softmax attention | All-to-all | O(N²) | Maximal |
| Low-rank attention | Dense approximation via low-rank maps | O(Nm) | Simulated by ANNA |
| Reformer-like LSH | Hash buckets, sorted chunks | O(N log N) | Fails on some multi-output tasks |
| ANNA (LSH-based) | Approximate nearest neighbors | O(N^{1+3ρ} log N) | MPC-equivalent; handles multi-hop and induction tasks |

Unlike Reformer-style fixed-chunk attention, which can miss true neighbors because of its spatial sorting into chunks, ANNA ensures attention is paid only to “relevant” tokens based on embedding similarity, retaining capacity for functions such as averaging and distributed computation.

6. Connections to Massively Parallel Computation (MPC)

A central theoretical contribution is the equivalence between ANNA-transformers and MPC protocols:

  • Any R-round MPC algorithm (with restricted local memory) can be realized by an O(R)-layer ANNA-transformer of nearly sub-quadratic width.
  • Any computation by an ANNA-transformer can be simulated in distributed fashion via MPC within a comparable number of rounds and processors.

This equivalence provides a unified framework for reasoning about the capabilities and limitations of efficient attention mechanisms and situates ANNA as a general tool for scalable, distributed neural computation.

7. Implications and Future Directions

The theoretical guarantees and practical construction position ANNA as an efficient method for attention in large-scale neural models:

  • Enables transformer architectures to scale sequence length and embedding dimension without quadratic bottlenecks
  • Retains full expressive power for distributed tasks and complex reasoning—no significant loss compared to full attention
  • Provides a rigorous unification of efficient attention variants, creating a foundation for future designs in attention mechanisms that leverage randomized or approximate neighbor computation

Subsequent directions may include further optimizations in hashing architectures, adaptive neighbor selection, and integration with hardware for maximally parallel execution. In particular, tuning LSH parameters and dynamic candidate list sizing could improve both runtime and retrieval accuracy in practical settings. Furthermore, analysis of failure probabilities and attention weight distributions in high-dimensional spaces will inform robust and reliable deployment of ANNA in production models (Liu et al., 10 Sep 2025).
