Gated KalmaNet: Full-History, Linear-Memory Model
- Gated KalmaNet (GKA) is a neural sequence layer that conditions on the full past at every step by formulating the sequence update as an online ridge regression problem solved with Chebyshev iteration.
- It achieves constant-memory, linear-time computation while employing adaptive regularization and gating mechanisms to ensure numerical stability and precise long-range recall.
- Empirical evaluations reveal that GKA outperforms fading-memory models on long-context tasks and scales efficiently on modern accelerators with ultra-long sequences.
Gated KalmaNet (GKA) is a neural sequence layer that bridges the performance gap between quadratic-cost softmax attention and linear-memory fading-memory state-space models (SSMs). GKA achieves constant-memory, linear-time computation while conditioning the output at each timestep on the complete sequence history, leveraging test-time online ridge regression solved via a numerically stable Chebyshev iteration. This approach retains the efficiency and scalability of SSMs while enabling exact recall of the entire context, addressing limitations inherent in previous methods.
1. Motivation and Relation to Prior Architectures
Traditional softmax attention mechanisms, as used in Transformers, maintain explicit access to all past key–value pairs, enabling “eidetic” memory at quadratic cost in sequence length. This renders ultra-long-context inference (≫10K tokens) computationally expensive and often impractical. Linear SSM layers such as RetNet, Mamba2, DeltaNet, and GLA replace the attention memory with a fixed-size state $S_t$, updated by a recurrence of the generic form
$$S_t = G_t S_{t-1} + v_t k_t^\top,$$
eliminating the quadratic memory cost and reducing per-token computation to $O(d^2)$. However, because the transition operator $G_t$ is contractive, the effective state retains only a fading, lossy summary of the distant past, resulting in inferior performance on tasks that require precise, long-range recall.
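For intuition, here is a minimal NumPy sketch of such a fading-memory update with a single scalar decay (a deliberate simplification; RetNet, Mamba2, GLA, and DeltaNet each use richer, input-dependent transitions):

```python
import numpy as np

def fading_ssm(keys, values, queries, decay=0.95):
    """Generic fading-memory linear recurrence: S_t = decay * S_{t-1} + v_t k_t^T.

    keys, values, queries: arrays of shape (L, d).
    The contribution of token i to S_t is scaled by decay**(t - i), so distant
    tokens are exponentially attenuated -- the lossy summary described above.
    """
    L, d = keys.shape
    S = np.zeros((d, d))
    outputs = np.empty((L, d))
    for t in range(L):
        S = decay * S + np.outer(values[t], keys[t])  # O(d^2) time and memory per token
        outputs[t] = S @ queries[t]
    return outputs
```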
GKA is designed to preserve the compute and memory efficiency of linear SSMs while, at each timestep, exactly conditioning on all prior inputs. This is accomplished by formulating the sequence model update as a test-time online ridge regression over the entire history, systematically overcoming the recall limitations of fading-memory models (Peng et al., 26 Nov 2025).
2. Mathematical Formulation
At each time $t$, GKA computes a state $S_t$ by solving a regularized, weighted least-squares regression in dual (information) form:
$$S_t = \arg\min_{S} \;\sum_{i=1}^{t} w_{t,i}\,\bigl\| S k_i - v_i \bigr\|_2^2 \;+\; \lambda_t \,\| S \|_F^2,$$
where, for each step $i \le t$:
- $k_i \in \mathbb{R}^{d}$ and $v_i \in \mathbb{R}^{d}$ are the key and value vectors,
- $w_{t,i} \in (0,1]$ are learned exponential fading weights,
- $\lambda_t > 0$ provides Tikhonov regularization.
The analytic solution is
$$S_t \;=\; B_t \left( H_t + \lambda_t I \right)^{-1}, \qquad B_t = \sum_{i=1}^{t} w_{t,i}\, v_i k_i^\top, \qquad H_t = \sum_{i=1}^{t} w_{t,i}\, k_i k_i^\top,$$
with $I$ the $d \times d$ identity matrix. The output for query $q_t$ is $o_t = S_t q_t$, with $q_t \in \mathbb{R}^{d}$.
By regressing over all past pairs $(k_i, v_i)$ (with appropriately chosen or learned exponential fading weights), GKA departs from conventional SSMs’ fixed state summaries, providing a theoretically optimal solution to full-history regression under linear-memory constraints.
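For concreteness, a naive NumPy reference for this closed form at a single step (materializing the weighted sums from the full history; illustrative only, not the efficient recurrent implementation):

```python
import numpy as np

def gka_reference_output(keys, values, query, weights, lam):
    """Closed-form ridge-regression readout o_t = S_t q_t at one timestep.

    keys:    (t, d) all keys k_1..k_t
    values:  (t, d) all values v_1..v_t
    query:   (d,)   current query q_t
    weights: (t,)   fading weights w_{t,1..t}
    lam:     float  Tikhonov regularization lambda_t
    """
    d = keys.shape[1]
    H = keys.T @ (weights[:, None] * keys)        # H_t = sum_i w_{t,i} k_i k_i^T
    B = values.T @ (weights[:, None] * keys)      # B_t = sum_i w_{t,i} v_i k_i^T
    S = B @ np.linalg.inv(H + lam * np.eye(d))    # S_t = B_t (H_t + lambda_t I)^{-1}
    return S @ query                              # o_t = S_t q_t
```

In the actual layer, $H_t$ and $B_t$ are maintained recurrently via rank-one updates rather than recomputed from the full history, and the explicit inverse is replaced by the Chebyshev solver of Section 4.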
3. Adaptive Regularization and Gating Mechanisms
Numerical stability of ridge regression deteriorates on long sequences due to the increasing condition number of the Gram matrix $H_t$. Uniform regularization can lead to either loss of memory (if $\lambda$ is too large) or instability (if it is too small). GKA addresses this with an adaptive regularization schedule in which $\lambda_t$ grows with the scale of the accumulated Gram matrix, controlled by $\kappa_{\max}$, a learnable or preset hyperparameter. This bounds the condition number of $H_t + \lambda_t I$ by the constant $\kappa_{\max}$, ensuring numerical tractability across sequence lengths and avoiding catastrophic forgetting or gradient instability.
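One simple way to realize such a schedule (an illustrative assumption, not necessarily the paper's exact rule) is to tie $\lambda_t$ to a running estimate of the Gram matrix's norm so that the condition number of the regularized system provably stays below $\kappa_{\max}$:

```python
def adaptive_lambda(gram_frobenius_norm, kappa_max=100.0):
    """Illustrative adaptive regularization schedule (assumed form, not the exact GKA rule).

    With lambda_t = ||H_t||_F / (kappa_max - 1), the condition number of H_t + lambda_t I
    satisfies (lambda_max(H_t) + lambda_t) / lambda_t <= ||H_t||_F / lambda_t + 1 = kappa_max,
    since the largest eigenvalue of a PSD matrix is bounded by its Frobenius norm.
    """
    return gram_frobenius_norm / (kappa_max - 1.0)
```

This particular choice is consistent with the Frobenius-norm tracking described in the hardware-aware implementation of Section 5.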
Additionally, recency bias and flexible memory can be learned using an input-conditioned gating architecture. The per-token fading weights are parameterized in product form,
$$w_{t,i} \;=\; \prod_{j=i+1}^{t} \alpha_j, \qquad \alpha_j = \sigma(\cdot) \in (0,1),$$
where each gate $\alpha_j$ is computed from the current input and an internal summary state, with $\sigma$ the sigmoid activation. This product form ensures exponential decay in memory contribution and is implemented in $O(1)$ additional memory per token.
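A minimal sketch of these product-form weights, using a hypothetical linear gate projection `W_g` (the paper's exact gate parameterization, including its internal summary state, is not reproduced here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fading_weights(inputs, W_g):
    """Product-form fading weights w_{t,i} = prod_{j=i+1..t} alpha_j.

    inputs: (L, d) token representations; W_g: (d,) hypothetical gate projection.
    Returns a lower-triangular (L, L) matrix W with W[t, i] = w_{t,i}.
    """
    L = inputs.shape[0]
    alpha = sigmoid(inputs @ W_g)      # per-token gates alpha_j in (0, 1)
    W = np.zeros((L, L))
    for t in range(L):
        w = 1.0
        W[t, t] = w                    # the current token enters with weight 1
        for i in range(t - 1, -1, -1):
            w *= alpha[i + 1]          # accumulate alpha_{i+1} * ... * alpha_t
            W[t, i] = w
    return W
```

The $L \times L$ matrix is materialized here only for clarity; in the layer itself each gate $\alpha_t$ simply rescales the running statistics (e.g., $H_t = \alpha_t H_{t-1} + k_t k_t^\top$), which is what keeps the additional memory cost at $O(1)$ per token.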
Ablation studies reveal that omitting adaptive regularization causes spiky gradients and training collapse; removing gating degrades recall by 7–10% on retrieval tasks. These components are thus essential to GKA’s performance and stability (Peng et al., 26 Nov 2025).
4. Chebyshev Iteration and Numerical Solvers
Direct inversion or Cholesky decomposition of $H_t + \lambda_t I$ scales as $O(d^3)$ per step, which is impractical for modern architectures and large state dimensions $d$. GKA substitutes this with $T$ iterations of the Chebyshev method, which offers the following properties:
- Complexity per step: $O(d^2)$ via matrix-vector products and rank-one updates.
- Convergence in $O(\sqrt{\kappa})$ steps, where $\kappa$ is the condition number.
To produce the output $o_t = B_t (H_t + \lambda_t I)^{-1} q_t$, Chebyshev iteration solves the linear system $(H_t + \lambda_t I)\, z = q_t$ using only precomputed bounds $[\ell, u]$ on the spectrum of the regularized Gram matrix, which are available by construction since the adaptive regularization caps the condition number at $\kappa_{\max}$. With midpoint $\theta = (u + \ell)/2$ and half-width $\delta = (u - \ell)/2$, the iteration starts from $z_0 = 0$, $r_0 = q_t$ and applies the three-term recurrence
$$p_k = r_{k-1} + \beta_k\, p_{k-1}, \qquad z_k = z_{k-1} + \alpha_k\, p_k, \qquad r_k = r_{k-1} - \alpha_k\, (H_t + \lambda_t I)\, p_k,$$
for $k = 1, \dots, T$, where the scalar coefficients $\alpha_k, \beta_k$ depend only on $\theta$ and $\delta$, not on the iterates; the output is then $o_t \approx B_t z_T$.
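A self-contained sketch of this solver in the standard textbook formulation (the exact coefficient bookkeeping in GKA's kernel may differ):

```python
import numpy as np

def chebyshev_solve(A, b, lam_min, lam_max, num_iters=25):
    """Chebyshev iteration for A z = b, with A symmetric positive definite and
    eigenvalues known to lie in [lam_min, lam_max].

    All step sizes are scalars precomputed from the spectral bounds, so the
    method needs no inner products of iterates.
    """
    theta = 0.5 * (lam_max + lam_min)   # midpoint of the spectral interval
    delta = 0.5 * (lam_max - lam_min)   # half-width of the spectral interval
    z = np.zeros_like(b)
    r = b.copy()                        # residual b - A z (z starts at 0)
    p = np.zeros_like(b)
    alpha = 0.0
    for k in range(num_iters):
        if k == 0:
            p = r.copy()
            alpha = 1.0 / theta
        else:
            beta = (0.5 * delta * alpha) ** 2
            alpha = 1.0 / (theta - beta / alpha)
            p = r + beta * p
        z = z + alpha * p
        r = r - alpha * (A @ p)         # one O(d^2) matrix-vector product per step
    return z
```

In GKA's setting, $A$ is the regularized Gram matrix $H_t + \lambda_t I$ and $b$ is the query $q_t$; under the illustrative schedule sketched in Section 3, $\ell = \lambda_t$ and $u = \kappa_{\max}\lambda_t$ are valid spectral bounds.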
Compared to conjugate gradient, Chebyshev iteration is more robust in low-precision arithmetic (such as bfloat16) because its step sizes are fixed scalars derived from the spectral bounds rather than data-dependent inner-product ratios, which become noisy and ill-conditioned at low precision. The structure of the recurrence also allows the backward pass to be computed without storing all intermediate iterates, reducing memory overhead.
5. Hardware-Aware Implementation
To further optimize for modern accelerators, GKA uses chunk-wise state management (a simplified sketch follows the list):
- The token stream of length $L$ is divided into chunks of size $C$.
- Only the states ($H_t$, $B_t$) are materialized at chunk boundaries ($t = C, 2C, \dots$).
- Within each chunk, key matrices, Gram matrices, and the Frobenius norms are updated in parallel, supporting efficient Chebyshev iterations.
- The Frobenius norm is maintained using block/triangular masks and cumulative product vectors from local chunked key sets, avoiding $O(d^2)$ extra space per token.
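The sketch below shows the boundary-materialization idea for the weighted Gram statistic $H_t$ (sequential inside the chunk for readability; the real kernel batches the within-chunk work and also tracks $B_t$ and the Frobenius norm):

```python
import numpy as np

def chunked_gram_states(keys, alphas, chunk_size):
    """Chunk-wise accumulation of the weighted Gram matrix H_t.

    keys:   (L, d) key vectors
    alphas: (L,)   per-token gates, so H_t = alpha_t * H_{t-1} + k_t k_t^T
    Only the state at each chunk boundary is stored; everything inside a chunk
    can be recomputed from the previous boundary state plus local tokens.
    """
    L, d = keys.shape
    H = np.zeros((d, d))
    boundary_states = []
    for start in range(0, L, chunk_size):
        end = min(start + chunk_size, L)
        for t in range(start, end):                        # batched per chunk in practice
            H = alphas[t] * H + np.outer(keys[t], keys[t])
        boundary_states.append(H.copy())                   # materialize at the boundary only
    return boundary_states
```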
Backward gradients are handled via implicit differentiation and reapplication of Chebyshev iterations on transposed systems, allowing efficient memory usage and low-latency training on hardware such as GPUs and TPUs.
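The identity behind this is the implicit-function rule for a linear solve: if $z = A^{-1} b$, then the gradients of a scalar loss satisfy $\bar{b} = A^{-\top} \bar{z}$ and $\bar{A} = -\bar{b}\, z^\top$, so the backward pass is just one more solve with $A^\top$ (here $A = H_t + \lambda_t I$ is symmetric, so the same solver is reused). A minimal sketch, reusing the `chebyshev_solve` helper from Section 4:

```python
import numpy as np

def chebyshev_solve_backward(A, z, grad_z, lam_min, lam_max, num_iters=25):
    """Backward pass of z = A^{-1} b via implicit differentiation.

    grad_z is the upstream gradient dL/dz. Returns (grad_A, grad_b). No forward
    Chebyshev iterates need to be stored -- only the solution z and one extra solve.
    """
    grad_b = chebyshev_solve(A, grad_z, lam_min, lam_max, num_iters)  # A^{-T} grad_z (A symmetric)
    grad_A = -np.outer(grad_b, z)                                     # dL/dA = -grad_b z^T
    return grad_A, grad_b
```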
6. Computational Complexity
GKA’s per-token and total complexity is summarized in the following table:
| Operation | Per-Token Complexity | Memory Requirement |
|---|---|---|
| Chebyshev solve ($T$ iterations) | $O(T\,d^2)$ | $O(d^2)$ per chunk state |
| Update of $H_t$, $B_t$ | $O(d^2)$ | $O(d^2)$ |
| Total over a length-$L$ sequence | $O(L\,T\,d^2)$ total | $O(d^2)$ (constant in $L$) |
Here $T$ is the number of Chebyshev iterations (typically 20–30, independent of $L$ or $d$). This yields compute that is linear in sequence length and memory that is constant with respect to context length, matching the best-case characteristics of SSMs.
7. Empirical Findings and Extensions
Empirical results establish GKA’s state-of-the-art performance among linear-memory models:
- On synthetic associative-recall (MQAR) tasks (up to 8K tokens), GKA outperforms Mamba2, GLA, and Gated DeltaNet by 5–10 points in recall accuracy at matched state dimensions.
- For short-context language modeling (LM-Harness, 2.8B parameter regime, FDA and SWDE tasks), GKA surpasses SSM baselines by approximately 10% relative, approaching the performance of full softmax Transformers.
- On long-context applications such as retrieval-augmented generation (RAG) and LongQA (up to 128K tokens), GKA delivers >10% relative improvement over fading-memory methods, with recall competitive to full attention up to 32K tokens.
- Ablation analyses show that both adaptive regularization and gating are necessary for stability and superior recall; Chebyshev iteration is indispensable for reliable low-precision training and inference.
Further, the architecture supports several prospective enhancements:
- Sketching the normal equations down to a lower dimension can accelerate Chebyshev iterations by about 10% in throughput, with under 1% accuracy loss.
- Hybrid designs that alternate full-attention heads with GKA yield additional recall gains at low incremental cost.
- Extensions to kernelized or non-linear ridge regression (deep test-time optimization) are open research directions.
- Scaling to architectures above 10B parameters and integrating efficient inference schemes (prefix caching, custom kernels) are promising for increased deployment.
GKA thus operationalizes full-history regression within a linear-memory, hardware-friendly architecture, substantially mitigating the historical tradeoff between efficiency and memory retention in neural sequence modeling (Peng et al., 26 Nov 2025).