
FAVOR+: Scalable Self-Attention via Positive Features

Updated 29 November 2025
  • The paper introduces FAVOR+, a random-feature-based self-attention mechanism that approximates the softmax operator with linear time and space complexity.
  • It employs positive orthogonal random features to ensure unbiasedness and controlled variance, enabling stable kernel approximations.
  • The method integrates seamlessly into Transformer architectures, providing competitive accuracy and dramatically improved scalability for long sequence modeling.

FAVOR+ (Fast Attention Via positive Orthogonal Random features) is a random-feature-based self-attention mechanism designed to provide unbiased and scalable approximations to the softmax operator in Transformers. By leveraging positive orthogonal random features, FAVOR+ achieves linear complexity in both time and space without compromising theoretical properties such as unbiasedness or uniform convergence. This attention mechanism is a core component of the Performer architecture, enabling application to long contexts where standard quadratic attention becomes infeasible (Choromanski et al., 2020).

1. Problem Statement and Motivation

The computational bottleneck in standard Transformer self-attention arises from the explicit formation of the $L \times L$ attention matrix $A = \operatorname{softmax}(QK^\top/\sqrt{d})$, leading to $O(L^2 d)$ time and $O(L^2)$ space complexity for input length $L$ and feature dimension $d$. These requirements prohibit scaling to sequences beyond a few thousand tokens.
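
For concreteness, the following minimal NumPy sketch (our own illustration, not code from the paper) shows where the quadratic cost arises: the $(L, L)$ matrix is materialized explicitly before being applied to the values.

```python
# Baseline softmax attention: the (L, L) matrix A is formed explicitly,
# which is exactly the O(L^2) time/memory bottleneck described above.
import numpy as np

def softmax_attention(Q, K, V):
    """Q, K: (L, d); V: (L, d_v). Returns (L, d_v)."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)                            # (L, L) -- quadratic in L
    A = np.exp(logits - logits.max(axis=-1, keepdims=True))  # numerically stable softmax
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V
```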

FAVOR+ addresses this by exploiting the kernel identity $\exp(x^\top y) = \mathbb{E}_{\omega \sim \mathcal{N}(0, I_d)}\left[\exp\left(\omega^\top x - \tfrac{1}{2}\|x\|^2\right)\exp\left(\omega^\top y - \tfrac{1}{2}\|y\|^2\right)\right]$, which expresses the softmax kernel as the expectation of a product of positive random features. This linearizes the attention computation as matrix–matrix products over feature expansions of $Q$ and $K$, yielding linear-time and linear-space algorithms (Choromanski et al., 2020).
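
A quick Monte Carlo check of this identity, written as a hedged NumPy sketch (dimension, scale, and sample count are arbitrary illustration values):

```python
# Verify exp(x^T y) = E_{w ~ N(0, I_d)}[exp(w^T x - |x|^2/2) * exp(w^T y - |y|^2/2)]
# by averaging over many sampled w.
import numpy as np

rng = np.random.default_rng(0)
d, num_samples = 8, 200_000

x = rng.normal(size=d) * 0.5
y = rng.normal(size=d) * 0.5

w = rng.normal(size=(num_samples, d))     # one omega ~ N(0, I_d) per row
estimate = np.mean(np.exp(w @ x - 0.5 * x @ x) * np.exp(w @ y - 0.5 * y @ y))
print(f"exact: {np.exp(x @ y):.4f}   Monte Carlo estimate: {estimate:.4f}")
```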

2. Positive Orthogonal Random Feature Construction

The cornerstone of FAVOR+ is the use of positive random features, specifically the mapping $\phi^{+}(x) = m^{-1/2} \exp(-\|x\|^2/2)\left[\exp(\omega_1^\top x), \ldots, \exp(\omega_m^\top x)\right] \in \mathbb{R}^m$, where $\omega_i \sim \mathcal{N}(0, I_d)$ or another isotropic distribution. This yields the crucial unbiasedness property $\mathbb{E}[\phi^+(q)^\top \phi^+(k)] = \exp(q^\top k)$.

Variance can be further reduced by extending to hyperbolic features,

$$\phi^{\text{hyp}+}(x) = (2m)^{-1/2}\, e^{-\|x\|^2/2}\left[e^{\omega_1^\top x}, e^{-\omega_1^\top x}, \ldots, e^{\omega_m^\top x}, e^{-\omega_m^\top x}\right],$$

or by using orthogonal random features (ORFs), which constrain the directions of $\{\omega_i\}$ to be exactly orthogonal while keeping each $\omega_i$ marginally Gaussian, reducing redundancy and thus variance without sacrificing unbiasedness (Choromanski et al., 2020).
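
The sketch below gives illustrative NumPy versions of the positive feature map, its hyperbolic variant, and an orthogonal Gaussian sampler. Function names and structure are our own and not the official Performer implementation.

```python
import numpy as np

def positive_features(X, W):
    """phi^+(x) = m^{-1/2} exp(-|x|^2/2) [exp(w_1^T x), ..., exp(w_m^T x)], applied row-wise to X."""
    sq = 0.5 * np.sum(X**2, axis=-1, keepdims=True)       # (L, 1)
    return np.exp(X @ W.T - sq) / np.sqrt(W.shape[0])     # (L, m)

def hyperbolic_positive_features(X, W):
    """phi^{hyp+}: pairs exp(+w^T x) with exp(-w^T x) to lower estimator variance."""
    sq = 0.5 * np.sum(X**2, axis=-1, keepdims=True)
    proj = X @ W.T
    feats = np.concatenate([np.exp(proj - sq), np.exp(-proj - sq)], axis=-1)
    return feats / np.sqrt(2 * W.shape[0])                # (L, 2m)

def orthogonal_gaussian(m, d, rng):
    """Stacks blocks of exactly orthogonal rows whose norms match those of Gaussian vectors."""
    blocks = []
    while sum(b.shape[0] for b in blocks) < m:
        q, _ = np.linalg.qr(rng.normal(size=(d, d)))      # square orthogonal matrix
        norms = np.linalg.norm(rng.normal(size=(d, d)), axis=1)
        blocks.append(q * norms[:, None])                 # rescale rows to chi(d)-distributed norms
    return np.concatenate(blocks, axis=0)[:m]
```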

3. FAVOR+ Attention Algorithm and Computational Complexity

The FAVOR+ mechanism replaces the quadratic attention step with a sequence of matrix multiplications involving these random features:

  1. Compute $Q' = \phi(Q)$ and $K' = \phi(K)$, both of shape $L \times m$.
  2. Compute $S = (K')^\top V \in \mathbb{R}^{m \times d}$ and $Z = Q' S$.
  3. Compute the normalization vector $t = Q'(K'^\top \mathbf{1}_L) \in \mathbb{R}^{L}$.
  4. Output the attention values as $\operatorname{diag}(t)^{-1} Z$.

Causal (autoregressive) attention can be supported with slight algorithmic modifications (e.g., a parallel prefix-sum over per-position contributions). For non-autoregressive attention, the total cost is $O(Lmd)$ time and $O(Lm + Ld + md)$ working memory, with $m \ll L$ controlling the trade-off between accuracy and efficiency (Choromanski et al., 2020).
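
A minimal non-causal FAVOR+ sketch following steps 1–4 above is given below; the function and variable names are ours, and the snippet is an illustration under the stated assumptions rather than the reference implementation.

```python
import numpy as np

def positive_features(X, W):
    """phi^+ from Section 2: m^{-1/2} * exp(X @ W^T - |x|^2/2), applied row-wise."""
    sq = 0.5 * np.sum(X**2, axis=-1, keepdims=True)
    return np.exp(X @ W.T - sq) / np.sqrt(W.shape[0])

def favor_plus_attention(Q, K, V, W, eps=1e-6):
    """Q, K: (L, d); V: (L, d_v); W: (m, d). Never forms an (L, L) matrix."""
    scale = Q.shape[-1] ** -0.25          # split the softmax 1/sqrt(d) scaling between Q and K
    Qp = positive_features(Q * scale, W)  # step 1: Q', shape (L, m)
    Kp = positive_features(K * scale, W)  #         K', shape (L, m)
    S = Kp.T @ V                          # step 2: S = K'^T V, shape (m, d_v)
    Z = Qp @ S                            #         Z = Q' S, shape (L, d_v)
    t = Qp @ Kp.sum(axis=0)               # step 3: t = Q'(K'^T 1_L), shape (L,)
    return Z / (t[:, None] + eps)         # step 4: diag(t)^{-1} Z

# Example usage: m = 64 random features, L = 1024 tokens, head dimension d = 32.
rng = np.random.default_rng(0)
L, d, m = 1024, 32, 64
Q, K, V = rng.normal(size=(3, L, d))
W = rng.normal(size=(m, d))               # i.i.d. features; orthogonal blocks can be substituted
print(favor_plus_attention(Q, K, V, W).shape)   # (1024, 32)
```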

4. Theoretical Guarantees and Analytical Properties

FAVOR+ maintains several important theoretical properties:

  • Unbiasedness: $\mathbb{E}[\phi^+(q)^\top \phi^+(k)] = \exp(q^\top k)$ holds for any $q, k$.
  • Variance Bounds: For i.i.d. draws of $\omega_i$, the mean-squared error (MSE) of the kernel estimate is analytically characterized. Using orthogonal random features provably reduces the MSE compared to independent sampling and to classic trigonometric random features, the latter of which are numerically unstable in softmax attention (a Monte Carlo illustration follows this list).
  • Uniform Convergence: Provided $m = O\left((d/\delta^2)\log(\cdots)\right)$ random features, the supremum norm $\|\hat{A} - A\|_\infty$ can be controlled to within $\delta$ with high probability over compact support. These results ensure that the approximate attention matches the full softmax attention with strong precision guarantees, even for large $L$ (Choromanski et al., 2020).
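
As a hedged illustration of the variance claim, the following small Monte Carlo experiment (our own, with arbitrary settings, not results from the paper) compares the empirical MSE of the softmax-kernel estimate $\exp(q^\top k)$ under i.i.d. Gaussian features versus exactly orthogonal features with the same feature count:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, trials = 16, 16, 5000           # m <= d so a single orthogonal block suffices

q = rng.normal(size=d) * 0.3
k = rng.normal(size=d) * 0.3
exact = np.exp(q @ k)

def phi_plus(x, W):
    # positive random features for a single vector x
    return np.exp(W @ x - 0.5 * x @ x) / np.sqrt(W.shape[0])

def sample_iid():
    return rng.normal(size=(m, d))

def sample_orthogonal():
    Qmat, _ = np.linalg.qr(rng.normal(size=(d, d)))      # exactly orthogonal directions
    norms = np.linalg.norm(rng.normal(size=(m, d)), axis=1)
    return Qmat[:m] * norms[:, None]                      # rescale to Gaussian-like norms

def mse(sampler):
    errors = []
    for _ in range(trials):
        W = sampler()                                     # same W for both q and k
        errors.append((phi_plus(q, W) @ phi_plus(k, W) - exact) ** 2)
    return float(np.mean(errors))

print(f"i.i.d. features MSE:     {mse(sample_iid):.3e}")
print(f"orthogonal features MSE: {mse(sample_orthogonal):.3e}")   # typically lower
```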

5. Comparison with Other Random Feature Attention Mechanisms

The Performer family distinguishes several random feature approaches:

Method | Random Features | Variance
TrigRF | Trigonometric (sine/cosine) | High; numerically unstable
FAVOR+ | Positive exponential | Stable, but variance can be high
FAVOR++ | GERF (scalar parameter $a$) | Variance reduced (by factors of roughly $e^{2}$–$e^{3}$)
FAVOR# | SDERF/ADERF (matrix parameters) | Up to $e^{10}$ variance reduction

Classic trigonometric random features (TrigRF) exhibit numerical instability because the feature map can take negative values, yielding negative or near-zero kernel estimates in the attention normalizer. FAVOR+ improves upon this by using strictly positive exponential features, stabilizing computation but still incurring significant variance. FAVOR++ (GERF) introduces a single tunable scalar inside the exponent to reduce variance. FAVOR# generalizes this approach further with matrix parameters, enabling variance reduction by up to $e^{10}$ on challenging datasets (Likhosherstov et al., 2023).

6. Practical Integration and Empirical Results

FAVOR+ is implemented as a direct replacement of softmax attention in multi-head self-attention blocks. This does not require modifications to adjacent network components such as feed-forward layers or normalization. The Performer architecture, which employs FAVOR+, demonstrates:

  • Linear scaling in sequence length for both training and inference (a 2–4$\times$ faster backward pass on long sequences; $O(Ld)$ vs. $O(L^2)$ memory).
  • Competitive accuracy with standard softmax Transformers across text, vision, and protein modeling tasks.
  • Stability and generality, supporting finetuning from pre-trained softmax Transformers and compatibility with reversible layers, LSH, and clustering (Choromanski et al., 2020).

Empirical studies confirm that variance minimization via orthogonal features and positive exponential maps yields sharp reductions in mean-squared error and improved accuracy, especially for long sequences outside the reach of standard architectures.

7. Significance and Context within Efficient Attention Research

FAVOR+ establishes a foundation for scalable self-attention without reliance on sparsity, low-rank approximations, or kernel sparsification. Its positive feature construction with provable uniform convergence and unbiasedness addresses long-standing issues of stability and variance in kernel approximations for attention. FAVOR+ directly enables efficient kernelized attention mechanisms in large-scale neural sequence modeling.

The subsequent development of FAVOR# and advanced feature parameterizations (such as SDERF and ADERF) further reduces variance and adapts to empirical data distributions via closed-form optimization, as shown in "FAVOR#: Sharp Attention Kernel Approximations via New Classes of Positive Random Features" (Likhosherstov et al., 2023). This positions FAVOR+ and its successors as central techniques in linearizing attention for diverse deep learning applications requiring robust and scalable sequence modeling.

References
  • Choromanski, K., Likhosherstov, V., et al. (2020). Rethinking Attention with Performers. arXiv:2009.14794.
  • Likhosherstov, V., Choromanski, K., et al. (2023). FAVOR#: Sharp Attention Kernel Approximations via New Classes of Positive Random Features.