
Cottention: Linear Transformers with Cosine Attention

Updated 3 March 2026
  • Cottention is a cosine-based attention mechanism that normalizes queries and keys to achieve linear memory complexity.
  • It replaces quadratic softmax attention with cosine similarity, enabling efficient long-sequence and causal decoding.
  • Empirical benchmarks on models like BERT and GPT-J demonstrate near-parity in accuracy with significant memory and latency improvements.

Cottention is an attention mechanism for transformers that replaces the softmax-based scoring kernel with a cosine similarity kernel and exploits the resulting associativity to achieve linear (and, for causal decoding, constant) inference-time memory with respect to sequence length. Developed as an alternative to traditional softmax attention, whose quadratic memory complexity limits scalability on long sequences, Cottention demonstrates comparable expressivity on standard benchmarks while offering substantial memory and potential computational savings. The mechanism was introduced and evaluated by Mongaras et al. in "Cottention: Linear Transformers With Cosine Attention" (Mongaras et al., 2024).

1. Motivation: Limitations of Softmax Attention in Transformers

Transformers leveraging self-attention have achieved state-of-the-art results across natural language processing and related domains, in part owing to the expressivity of the softmax-normalized dot-product attention kernel:

$$\mathrm{Attention}_{\text{softmax}}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$

where $Q, K \in \mathbb{R}^{N \times H \times s \times d_k}$ and $V \in \mathbb{R}^{N \times H \times s \times d_v}$, for batch size $N$, number of heads $H$, sequence length $s$, and key/value dimensionalities $d_k$, $d_v$. The $O(s^2)$ time and, more critically, memory cost of this mechanism, due to explicit storage of the $s \times s$ attention map for every head, becomes impractical for large $s$, particularly during inference.
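
As a reference point, the softmax kernel above can be sketched for a single head in NumPy (batch and head dimensions omitted; all names are illustrative, not taken from any released code):

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard softmax attention for one head; materializes the full s x s map."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # (s, s): quadratic in sequence length
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax rows sum to 1
    return weights @ V                             # (s, d_v)

rng = np.random.default_rng(0)
s, d_k, d_v = 8, 4, 4
Q, K, V = rng.normal(size=(s, d_k)), rng.normal(size=(s, d_k)), rng.normal(size=(s, d_v))
out = softmax_attention(Q, K, V)
```

The `(s, s)` `scores` array is exactly the per-head attention map whose storage dominates memory at large $s$.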

Cottention addresses this bottleneck by dispensing with the softmax normalization in favor of a cosine similarity kernel, enabling algebraic rearrangements that directly yield resource-efficient computation, crucial for long-sequence or streaming contexts (Mongaras et al., 2024).

2. Mathematical Formulation and Core Algorithm

Cottention replaces the softmax attention kernel by computing row-normalized queries and keys, followed by matrix multiplication:

$$\mathrm{CosAttention}(Q, K, V) = [\mathcal{N}(Q)\,\mathcal{N}(K)^T]\,V, \qquad \mathcal{N}(X) = \frac{X}{\|X\|_{2,\text{row}}}$$

Here, each query and key vector is $L^2$-normalized row-wise, so cosine similarity is computed as the dot product of unit vectors:

$$\cos(q, k) = \frac{q \cdot k}{\|q\|_2\,\|k\|_2}$$

To mitigate the growth of summed similarities with sequence length, a scalar parameter $m$ is trained per head; the output is stabilized by dividing $V$ by $s^{\sigma(m)}$ (where $\sigma$ is the sigmoid function), yielding:

$$\mathrm{CosAttention}(Q, K, V) = [\mathcal{N}(Q)\,\mathcal{N}(K)^T]\,[V / s^{\sigma(m)}]$$
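
A minimal single-head NumPy sketch of this formulation (batch and head dimensions omitted; `cosine_attention`, `Qn`, `Kn` are illustrative names, not from the paper's code):

```python
import numpy as np

def cosine_attention(Q, K, V, m):
    """Cosine-similarity attention with the per-head stabilizer s**sigmoid(m)."""
    s = Q.shape[0]
    Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)  # N(Q): row-wise L2 normalization
    Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)  # N(K)
    sigma_m = 1.0 / (1.0 + np.exp(-m))                  # sigmoid keeps the exponent in (0, 1)
    return (Qn @ Kn.T) @ (V / s ** sigma_m)             # entries of Qn @ Kn.T lie in [-1, 1]

rng = np.random.default_rng(0)
s, d_k, d_v = 8, 4, 4
Q, K, V = rng.normal(size=(s, d_k)), rng.normal(size=(s, d_k)), rng.normal(size=(s, d_v))
out = cosine_attention(Q, K, V, m=0.5)  # m = 0.5 matches the paper's initialization
```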

By associativity, one can compute $[\mathcal{N}(K)^T V]$ first (shape $H \times d_k \times d_v$ per batch element), then multiply by $\mathcal{N}(Q)$, bypassing $O(s^2)$ storage and reducing memory to $O(s d_v + d_k d_v)$. For bidirectional attention, this yields linear scaling in $s$.
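
The reassociation can be checked numerically; the two orderings produce the same result and differ only in the size of the intermediate (sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
s, d_k, d_v = 512, 32, 32
Qn = rng.normal(size=(s, d_k)); Qn /= np.linalg.norm(Qn, axis=-1, keepdims=True)
Kn = rng.normal(size=(s, d_k)); Kn /= np.linalg.norm(Kn, axis=-1, keepdims=True)
V = rng.normal(size=(s, d_v))

quadratic = (Qn @ Kn.T) @ V   # materializes an s x s score matrix
linear = Qn @ (Kn.T @ V)      # only a d_k x d_v intermediate

assert np.allclose(quadratic, linear)
```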

3. Causal Masking, RNN Reformulation, and Inference Efficiency

For autoregressive (causal) attention, direct factorization is blocked by the triangular mask. The Cottention algorithm circumvents this by reformulating causal attention computation as a recurrent neural network:

  • The hidden state at step $t$ is $H_t \in \mathbb{R}^{N \times H \times d_v \times d_k}$, updated by:

$$H_t = H_{t-1} + V_t \otimes K_t$$

  • The output for token $t$ is:

$$O_t = \sum_{i=1}^{d_k} [Q_t \odot H_t]_{:,i}$$

For streaming or stepwise inference, only $H_t$ need be stored and updated, so total memory remains $O(d_v d_k)$, independent of $s$. This property eliminates the need to store or recompute the full past $K$, $V$ tensors ("kv-caching") required by softmax attention.
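
A sketch of this recurrence for a single head, reading the state as $H_t \in \mathbb{R}^{d_v \times d_k}$ (the stabilizer $s^{\sigma(m)}$ is a scalar factor on $V$ and is omitted here; names are illustrative):

```python
import numpy as np

def causal_cottention_stepwise(Q, K, V):
    """Causal cosine attention as an RNN; only the (d_v, d_k) state H is kept."""
    s, d_k = Q.shape
    d_v = V.shape[1]
    Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)
    H = np.zeros((d_v, d_k))           # constant-size recurrent state
    out = np.empty((s, d_v))
    for t in range(s):
        H += np.outer(V[t], Kn[t])     # H_t = H_{t-1} + V_t (outer product) K_t
        out[t] = H @ Qn[t]             # O_t: contract H_t against Q_t over d_k
    return out

# Sanity check against causally masked full attention:
rng = np.random.default_rng(2)
s, d_k, d_v = 16, 8, 8
Q, K, V = rng.normal(size=(s, d_k)), rng.normal(size=(s, d_k)), rng.normal(size=(s, d_v))
Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)
full = np.tril(Qn @ Kn.T) @ V
assert np.allclose(causal_cottention_stepwise(Q, K, V), full)
```

The stepwise loop touches each token once and never allocates anything that grows with $s$, which is exactly the constant-memory property claimed for causal inference.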

A custom CUDA kernel implements this algorithm with one thread block per head-row and per-step accumulations, storing only $d_k \times d_v$ floats per head, enabling low-latency inference.

4. Computational Complexity Analysis

Cottention’s memory and time complexity are outlined in the following table:

  Mechanism                        Training Memory         Inference Memory (causal)   Time per step
  Softmax attention                O(s^2)                  O(s^2)                      O(s^2 d)
  Cottention (bidirectional)       O(s d_v + d_k d_v)      O(s d_v + d_k d_v)          O(s d^2)
  Cottention (causal, inference)   O(d_v d_k) (const.)     O(d_v d_k) (const.)         O(d^2)

Bidirectional Cottention provides linear memory in $s$; in causal (autoregressive) inference, the memory footprint is constant in $s$, whereas softmax attention always requires an $O(s)$ cache.
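
To make the asymptotics concrete, a back-of-envelope comparison for one head at assumed sizes (s = 32768, d_k = d_v = 64, fp16; these numbers are illustrative, not from the paper):

```python
# Per-head memory at fp16 for an assumed long-context configuration.
s, d_k, d_v, bytes_per_elem = 32768, 64, 64, 2

softmax_map = s * s * bytes_per_elem            # explicit s x s attention map
cottention_state = d_v * d_k * bytes_per_elem   # constant recurrent state H_t

print(f"softmax attention map: {softmax_map / 2**20:.0f} MiB")     # 2048 MiB
print(f"Cottention state:      {cottention_state / 2**10:.0f} KiB")  # 8 KiB
```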

5. Empirical Evaluation and Benchmarking

Cottention was benchmarked as a drop-in replacement for softmax attention in both BERT (bidirectional) and GPT-J (causal) architectures. Empirical results show:

  • On GLUE for BERT, Cottention attains scores within approximately 1.3 points of standard softmax attention on average, indicating near-parity in downstream task accuracy.
  • In GPT-J next-token prediction experiments on The Pile, both 300M and 1.2B parameter models achieve final perplexities nearly identical to softmax attention (e.g., at 1.2B: softmax ≈ 9.5, Cottention ≈ 9.6).
  • Empirical measurements on A100 GPUs confirm the predicted linear/constant scaling of memory usage with sequence length for Cottention, versus quadratic for softmax.
  • Wall-clock times favor Cottention for long sequences (when $s \gg d$), though for high $d$ and short $s$ softmax's lower multiplicative work can yield slightly lower training times.

Stabilization hyperparameters $m$ converge to 0.1–0.2 per head after training from an initialization of 0.5, indicating reduced reliance on normalization at convergence.
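
For intuition, the divisor $s^{\sigma(m)}$ shrinks as $m$ falls from its initialization toward the converged range, i.e. the learned model damps the summed similarities less aggressively (sequence length 2048 below is chosen arbitrarily):

```python
import math

def stabilizer(s, m):
    """Divisor s**sigmoid(m) applied to V in the Cottention output."""
    return s ** (1.0 / (1.0 + math.exp(-m)))

damping_init = stabilizer(2048, 0.5)        # initialization m = 0.5
damping_converged = stabilizer(2048, 0.15)  # typical converged m in 0.1-0.2
assert 1 < damping_converged < damping_init
```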

6. Implementation and Practical Details

Mongaras et al. provide a fully detailed CUDA kernel for Cottention, exploiting fused operations and memory locality to minimize both peak memory and compute time. Backpropagation is handled via a closed-form reversal of the forward cumulative-sum steps for the $Q$, $K$, $V$ gradients. No intermediate $s \times s$ or $s \times d$ arrays are stored; only the minimal recurrent state is maintained throughout.

This design supports easy integration into existing transformer codebases as a drop-in replacement for standard attention modules.

7. Implications and Future Directions

Cottention is distinguished by its ability to match the modeling capacity of softmax attention while reducing memory scaling, especially at inference where constant memory enables long-context generation and streaming. The RNN perspective suggests synergies with continual or online transformers and potentially hybrid architectures incorporating LSTM- or GRU-like gating on the incremental state.

Future work includes scaling Cottention to >10B-parameter models, optimizing kernel-level compute to close remaining throughput gaps versus specialized fast-attention implementations (e.g., FlashAttention), experimenting with alternative normalization schedules, and exploiting Cottention's algebraic structure for low-rank or matrix-factorized key–value pathways.

This reconceptualization of the attention mechanism paves the way to more efficient transformer architectures, especially for resource-constrained or real-time sequence modeling scenarios (Mongaras et al., 2024).

References

  • Mongaras et al. (2024). "Cottention: Linear Transformers With Cosine Attention."
