
Kimi Delta Attention: Efficient Long-Context Models

Updated 31 October 2025
  • Kimi Delta Attention (KDA) is a novel attention mechanism for transformers designed to support ultra-long contexts and multimodal integration using linear scaling.
  • KDA employs context-parallel, blockwise computation and a channel-wise gated delta rule to enhance retrieval accuracy, decoding throughput, and memory efficiency.
  • It seamlessly integrates with MoE and latent attention architectures, with open-source implementations demonstrating significant performance gains on extreme sequence lengths.

Kimi Delta Attention (KDA) refers to a family of attention mechanisms developed for efficient long-context, multimodal, and hardware-optimized transformer networks. Initially introduced in the context of Kimi-VL vision-LLMs, and subsequently advanced in linear attention research, KDA is tailored to achieve linear scaling in memory and computation, maintain state-of-the-art retrieval accuracy over extreme context lengths, and provide highly expressive memory control—all while supporting integration with efficient MoE and latent attention architectures. The following sections detail its formulation, theoretical underpinnings, empirical performance, architecture integration, and evolution in recent models.

1. Conceptual Foundation and Motivation

Kimi Delta Attention was created to address two principal issues in transformer-based long-context and multimodal architectures:

  1. The quadratic scaling of compute and memory associated with vanilla softmax attention, which limits practical context windows to ≲32K tokens.
  2. The requirement for seamless multimodal fusion and information retrieval across large input spaces, such as multi-page documents and high-resolution video.

Conventional solutions (blockwise, windowed, and FlashAttention kernels) improve single-device throughput but fail to scale globally while retaining high-fidelity retrieval or multimodal alignment. KDA’s primary objectives, as manifested in Kimi-VL and Kimi Linear, are to:

  • Provide native support for contexts of 128K tokens or longer.
  • Achieve linear memory scaling via context and hardware parallelism.
  • Integrate visual and textual token streams efficiently, with high retrieval performance (notably in “needle-in-a-haystack” tasks).

2. Mathematical Formulation and Architectural Mechanisms

Standard Attention (Reference)

For a token sequence of length $N$, the canonical transformer attention is

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$$

with compute and memory costs of $\mathcal{O}(N^2)$. This formulation is impractical for large $N$.
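For concreteness, a minimal reference implementation of this dense attention makes the quadratic cost explicit, since the full $N \times N$ score matrix must be materialized. The snippet below is an illustrative NumPy sketch only; the variable names and shapes are assumptions, not taken from any Kimi codebase.

import numpy as np

def dense_attention(Q, K, V):
    """Reference softmax attention; materializes an (N, N) score matrix."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (N, N): quadratic in sequence length
    scores -= scores.max(axis=-1, keepdims=True)    # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # (N, d_v)

N, d = 512, 64
Q, K, V = (np.random.randn(N, d) for _ in range(3))
print(dense_attention(Q, K, V).shape)               # (512, 64); at N = 128K the score
                                                    # matrix alone would hold ~1.7e10 entries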

KDA in Kimi-VL: Context-Parallel, Blockwise Attention

KDA replaces the monolithic computation with a hardware-parallel strategy:

  • The sequence is partitioned across devices ("context ranks"), each processing a subset, e.g., $L/n$ tokens per rank for $L = 128\text{K}$.
  • Within each rank, FlashAttention is used for maximal local efficiency.
  • Global AllReduce synchronizes context slices, approximating full attention while only computing necessary (“delta”) blocks per device.

Conceptual pseudocode:

L_per_device = L // n                       # tokens handled by each context-parallel rank
for rank in range(n):                       # one sequence chunk per device
    Q_chunk, K_chunk, V_chunk = get_local_chunk(rank, L_per_device)  # this rank's slice of Q, K, V
    # FlashAttention computes softmax(Q K^T / sqrt(d)) V over the local chunk;
    # blockwise/"delta" computation restricts work to the active blocks.
    local_output = compute_flash_attention(Q_chunk, K_chunk, V_chunk)
# AllReduce synchronizes the per-rank context slices into the global output
output = allreduce(local_output)

Multimodal Integration

  • Vision tokens (projected from encoders such as MoonViT) are injected into the joint sequence, with packed masking and 2D/1D Rotary Positional Embeddings ensuring global alignment.
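As a hedged illustration of this packing step (the helper below and its interface are assumptions for exposition, not the Kimi-VL implementation), vision-encoder outputs and text embeddings can be concatenated into one joint sequence while retaining per-token position indices for the rotary embedding and per-token modality tags for packed masking:

import numpy as np

def pack_multimodal(vision_emb, text_emb):
    """Concatenate vision and text embeddings into one packed sequence.

    Returns the joint sequence, per-token position ids (consumed by RoPE),
    and modality ids (0 = vision, 1 = text) used to build packed masks.
    """
    packed = np.concatenate([vision_emb, text_emb], axis=0)
    position_ids = np.arange(packed.shape[0])
    modality_ids = np.concatenate([np.zeros(len(vision_emb), dtype=int),
                                   np.ones(len(text_emb), dtype=int)])
    return packed, position_ids, modality_ids

vision_emb = np.random.randn(196, 64)   # e.g. patch tokens for one image
text_emb = np.random.randn(32, 64)
packed, pos_ids, mod_ids = pack_multimodal(vision_emb, text_emb)
print(packed.shape, pos_ids[-1], mod_ids[:3])   # (228, 64) 227 [0 0 0]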

KDA in Kimi Linear: Channel-wise Gated Delta Rule

In Kimi Linear (Team et al., 30 Oct 2025), KDA denotes a linear attention operator based on a DeltaNet update, further enhanced by fine-grained (per-channel) gating:

$$\mathbf{S}_t = (\mathbf{I} - \beta_t \bm{k}_t \bm{k}_t^\top)\,\operatorname{Diag}(\bm{\alpha}_t)\,\mathbf{S}_{t-1} + \beta_t \bm{k}_t \bm{v}_t^\top$$

where $\bm{\alpha}_t \in [0,1]^{d_k}$ allows feature-dimension-specific forgetting, vastly generalizing the scalar gating of previous Gated DeltaNet models.
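The update can be read as a per-token recurrence. The following is a naive sequential reference sketch (shapes and the readout $\bm{o}_t = \mathbf{S}_t^\top \bm{q}_t$ are standard linear-attention assumptions; the production kernel uses the chunkwise DPLR formulation described next rather than this loop):

import numpy as np

def kda_recurrence(q, k, v, alpha, beta):
    """Naive reference for the channel-wise gated delta rule.

    q, k:   (T, d_k) query/key vectors
    v:      (T, d_v) value vectors
    alpha:  (T, d_k) per-channel decay gates in [0, 1]
    beta:   (T,)     write strengths in [0, 1]
    """
    T, d_k = k.shape
    d_v = v.shape[1]
    S = np.zeros((d_k, d_v))                          # associative memory state
    outputs = np.empty((T, d_v))
    for t in range(T):
        S = alpha[t][:, None] * S                     # Diag(alpha_t) S_{t-1}: per-channel forgetting
        S = S - beta[t] * np.outer(k[t], k[t] @ S)    # delta-rule erase term (I - beta k k^T)
        S = S + beta[t] * np.outer(k[t], v[t])        # write the new key-value association
        outputs[t] = S.T @ q[t]                       # read out with the current query
    return outputs

T, d_k, d_v = 16, 8, 8
rng = np.random.default_rng(0)
out = kda_recurrence(rng.standard_normal((T, d_k)), rng.standard_normal((T, d_k)),
                     rng.standard_normal((T, d_v)), rng.uniform(size=(T, d_k)),
                     rng.uniform(size=T))
print(out.shape)  # (16, 8)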

Chunkwise parallelization further accelerates computation and aligns transitions to hardware-specific optimizations. Specialized Diagonal-Plus-Low-Rank (DPLR) updates maintain the classical delta rule structure for stability and throughput.

3. Performance, Scalability, and Efficiency

Benchmark Results and Scalability

Kimi-VL:

  • Supports up to 128K-token contexts natively, with linear memory and compute scaling.
  • Outperforms Qwen2.5-VL-7B on MMLongBench-Doc (35.1% vs. 29.6%), achieves 64.5% on LongVideoBench, and sustains strong needle-in-a-haystack (NIAH) recall at 128K (87–91.7%).

Kimi Linear:

  • Achieves up to 6× higher decoding throughput at 1M-token contexts (1.84 ms per token vs. 11.48 ms with full attention).
  • Reduces key-value cache usage by up to 75%.
  • Outperforms full MLA attention on evaluated tasks, including long-context retrieval and RL scaling benchmarks, e.g., 84.3 vs. 81.3% on RULER 128k.
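A back-of-the-envelope reading of the cache figure, under the assumption (stated purely as an illustration, not taken from the text above) that three of every four attention layers are KDA layers carrying a fixed-size recurrent state while the remaining quarter are full-attention MLA layers with a conventional, length-proportional KV cache:

# Hypothetical hybrid layout: 3 KDA layers (constant-size state, no growing
# KV cache) for every 1 full-attention MLA layer (standard KV cache).
mla_layers, kda_layers = 1, 3
kv_cache_fraction = mla_layers / (mla_layers + kda_layers)
print(f"KV cache vs. an all-MLA stack: {kv_cache_fraction:.0%}")   # 25%, i.e. a 75% reduction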

Empirical Summary Table

| Model | Context (tokens) | Retrieval Accuracy | Throughput Gain | KV Cache Used |
|---|---|---|---|---|
| Kimi-VL | 128K | 87–91.7% NIAH | ~1.6× | Linear |
| Kimi Linear | 1M | Highest in class | 6.3× | ≤25% |

4. Integration with Multimodal and Latent Architectures

KDA is tightly coupled with other high-efficiency transformer components, notably Mixture-of-Experts (MoE) feed-forward layers and latent-attention (MLA) layers within hybrid model stacks.

In multimodal deployments, KDA manages vision and language embeddings jointly, ensuring proper fusion and alignment even for ultra-high-resolution inputs.

5. Ablation Studies and Theoretical Insights

Models with KDA exhibit:

  • Superior test-time scaling; increasing sequence length yields better test accuracy, indicating effective utilization of extended memory (see scaling law and ablation plots).
  • Robustness: Models without KDA either fail (out-of-memory) or display marked collapse in long-context retrieval accuracy.
  • KDA’s per-channel gating provides greater expressiveness and resistance to catastrophic forgetting than per-head or blockwise-only variants.

A plausible implication is that the delta-based updates and fine-grained gating mechanisms are beneficial not just for speed and memory efficiency, but for overall model quality at extreme sequence lengths.

6. Implementation and Open Source Ecosystem

KDA implementations (context-parallel, FlashAttention-based, and chunkwise DPLR) are provided as open-source kernels through the vLLM and FLA repositories. These are compatible with standard full-attention pipelines and require no interface changes for cache handling or scheduling.

7. Broader Significance and Future Directions

KDA mechanisms pursue a paradigm shift from quadratic-cost, monolithic transformer attention to a memory- and hardware-friendly regime that does not compromise performance over long contexts or multimodal inputs. The channel-wise gating innovation in Kimi Linear suggests future attention modules may further exploit fine-grained control for improved scaling laws, RL efficiency, and multimodal fusion. The context-parallel and latent compression designs indicate fruitful directions for large-scale deployment in real-world agent systems and next-generation VLMs.


Kimi Delta Attention is thus defined by its channel-wise, blockwise, and context-parallel computation, which makes extremely long-context, multimodal transformer attention both practical and efficient at state-of-the-art quality (Team et al., 10 Apr 2025; Team et al., 30 Oct 2025; Tang et al., 21 Aug 2025).
