
Lag-Relative Sparse Attention (LRSA)

Updated 18 March 2026
  • LRSA is a structured sparse attention mechanism that dynamically selects top-K past tokens within fixed-size lag windows to reduce computational cost.
  • It integrates dynamic top-K selection with static masks to maintain consistency between training and inference, benefiting language models and time-series causality.
  • Empirical results demonstrate superior performance in LLM context compression and Granger causality detection compared to full attention and fixed-lag VAR models.

Lag-Relative Sparse Attention (LRSA) is a structured sparse attention mechanism designed to address the challenges of long-sequence modeling and temporal reasoning in both language modeling and multivariate time-series analysis. LRSA implements a dynamic top-K selection strategy that operates within fixed-size lag windows, allowing models to efficiently focus on the most salient historical context while significantly reducing computational and memory overhead. LRSA is applied in two primary domains: scalable training and inference for long-context LLMs, and causal discovery in temporal data via Granger causality.

1. Motivation and Principle

The principal challenge addressed by LRSA is the resource bottleneck imposed by full self-attention, whose time and memory complexity scale quadratically and linearly, respectively, in the sequence length $N$ ($O(N^2)$ compute, $O(N d_h)$ memory) (Liang et al., 13 Jun 2025). In LLM inference, this has motivated key-value (KV) cache compression, yet applying compression only at inference (e.g., cache truncation or LagKV) induces severe test-train mismatch and performance loss. In time-series analysis, conventional approaches such as Vector Autoregression (VAR) with fixed lags can neither adaptively select relevant historical steps nor efficiently ignore noise, which is particularly problematic for real-world data where lag structures vary unpredictably (Mahesh et al., 2024).

LRSA unifies these needs by enabling:

  • Sparse, differentiable attention over past context: each query attends to a dynamically chosen subset of keys/values from a lagging window.
  • Integration into both training and inference: the same sparsity pattern is applied throughout, avoiding generalization gaps.
  • Dynamic lag adaptation: models select salient time steps or tokens within a moving window, rather than relying on static lag boundaries.

2. Formal Algorithmic Structure

2.1. LRSA for Long-Context LLMs

Given a sequence $X = [x_1, \dotsc, x_N]$ and a lag window of size $L$, LRSA divides the input into consecutive chunks. For each chunk $p$:

  1. Key–value scoring: For $K^p, V^p \in \mathbb{R}^{L \times d_h}$, scores are assigned per entry using lag-relative normalization and the standard deviation across channels. A softmax is applied to the per-entry standard deviation; the total score is the sum of the key and value scores.
  2. Top-K retention: With retention ratio $r$ ($K = \lfloor r L \rfloor$), only the top $K$-scoring keys/values are retained in the global cache for attention by future queries.
  3. Static sparsity masking: Sparse block-triangular masks are constructed so that each query attends only to the retained positions. The resulting attention for each chunk is $A^{(p)} = \mathrm{softmax}\left(Q^{(p)} (K_{\text{cache}})^\top / \sqrt{d_h} + M^{(p)}\right)$, where $M^{(p)}$ assigns $-\infty$ to indices not in the retained set.
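The three steps above can be sketched in NumPy as follows. This is a minimal illustration, assuming one plausible reading of the scoring rule (a softmax over per-row channel standard deviations, computed separately for keys and values and summed); the function names are invented here, and the causal mask within the chunk is omitted for brevity.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def lag_relative_topk(K, V, r):
    """Score each of the L cached key/value rows and keep the top-K.

    Illustrative scoring: softmax of the per-row standard deviation
    across channels, for keys and values separately, summed per position.
    """
    L = K.shape[0]
    k_keep = int(np.floor(r * L))
    score = softmax(K.std(axis=1)) + softmax(V.std(axis=1))
    keep = np.sort(np.argsort(-score)[:k_keep])  # retained positions, in order
    return K[keep], V[keep], keep

def chunk_attention(Q, K_cache, V_cache):
    """Attention of one chunk's queries over the compressed cache."""
    d_h = Q.shape[-1]
    logits = Q @ K_cache.T / np.sqrt(d_h)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V_cache

# Toy usage: retain half of an 8-entry chunk, then attend over it.
rng = np.random.default_rng(0)
K = rng.normal(size=(8, 4))
V = rng.normal(size=(8, 4))
K_c, V_c, keep = lag_relative_topk(K, V, r=0.5)  # 4 of 8 rows retained
out = chunk_attention(rng.normal(size=(2, 4)), K_c, V_c)
```

Because the selection depends only on key/value statistics, the same compressed cache serves every future query, which is what allows the static masks of step 3 to be precomputed.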

2.2. LRSA for Granger Causality in Time Series

LRSA is implemented as a two-stage attention module within a Sparse Attention Transformer (SAT) (Mahesh et al., 2024):

  • Stage 1: Temporal attention. At each time $t$, the input is a sliding window $X' = [X_{t-2k}, \dotsc, X_{t-1}] \in \mathbb{R}^{2k \times D}$. Scaled dot-product attention is masked to be strictly causal (lower-triangular), and the top-$k$ most important past time steps are selected based on column-summed attention weights.
  • Stage 2: Cross-variable attention. The top-$k$ time steps are transposed to $X''' \in \mathbb{R}^{D \times k}$, treating each variable as a token. Standard attention is applied across variables, yielding the prediction vector. During inference for causality detection, masking is employed to simulate the exclusion of individual variables and compare predictive variances, producing the Conditional Granger Causality Index (CGCI).
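The two stages can be sketched as follows in NumPy. The window size $2k$, the strictly causal mask, and the column-summed importance follow the description above; the single-head, parameter-free attention and the function names are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def select_topk_steps(X_win, k):
    """Stage 1: causal temporal attention over a (2k, D) window, then keep
    the k past steps with the largest column-summed attention weight."""
    T, D = X_win.shape
    logits = X_win @ X_win.T / np.sqrt(D)
    causal = np.triu(np.full((T, T), -np.inf), k=1)  # block future positions
    w = softmax(logits + causal, axis=-1)
    importance = w.sum(axis=0)                       # column sums
    idx = np.sort(np.argsort(-importance)[:k])       # keep temporal order
    return X_win[idx], idx

def cross_variable_attention(X_sel):
    """Stage 2: treat each variable as a token over its k selected steps."""
    Xv = X_sel.T                                     # (D, k)
    w = softmax(Xv @ Xv.T / np.sqrt(Xv.shape[1]), axis=-1)
    return w @ Xv                                    # (D, k) mixed representation

# Toy usage: D=4 variables, window of 2k=6 steps, keep k=3.
rng = np.random.default_rng(1)
k, D = 3, 4
X_win = rng.normal(size=(2 * k, D))
X_sel, idx = select_topk_steps(X_win, k)
mixed = cross_variable_attention(X_sel)
```

In the full SAT, the masking of individual variables for CGCI computation would be applied at Stage 2, by blanking one variable-token at a time and comparing the resulting prediction errors.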

3. Computational Efficiency and Memory Profile

The LRSA mechanism provides asymptotic reductions in both compute and memory:

  • LLM context compression: For chunk size $L$ and retention ratio $r$, the time complexity is $O(N^2 r d_h / L)$ (assuming chunk stride $C \approx L$), with memory usage $O(r N d_h)$ (Liang et al., 13 Jun 2025).
  • Time-series transformers: The dual-stage attention design reduces the effective number of tokens/steps per attention operation, controlled by hyperparameters $k$ (selected top steps) and $2k$ (window size) (Mahesh et al., 2024).

A summary of the resource scaling:

| Configuration  | Time Complexity    | Memory Complexity |
|----------------|--------------------|-------------------|
| Full attention | $O(N^2 d_h)$       | $O(N d_h)$        |
| LRSA (LLM)     | $O(N^2 r d_h / L)$ | $O(r N d_h)$      |
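Plugging the typical hyperparameters reported below ($N = 32{,}768$, $L = 1024$, $r = 0.5$) into these asymptotic forms gives a rough sense of the savings; note that $d_h$ cancels in both ratios.

```python
# Relative cost of LRSA vs. full attention, using the table's asymptotic
# forms. d_h cancels in the ratios, so only N, L, and r matter.
N, L, r = 32_768, 1024, 0.5  # context length, chunk/window size, retention ratio

compute_ratio = (N**2 * r / L) / (N**2)  # O(N^2 r d_h / L) / O(N^2 d_h) = r / L
memory_ratio = (r * N) / N               # O(r N d_h)       / O(N d_h)   = r

print(compute_ratio)  # r / L ~ 4.9e-4
print(memory_ratio)   # 0.5
```

The compute ratio $r/L$ is independent of $N$, so the relative savings hold at any context length, while the absolute cost of both variants still grows quadratically.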

4. Implementation Details

LRSA is implemented as a post-training intervention with no additional parameters, requiring only static masks and top-K routines. Notable implementation features include:

  • Frameworks: Megatron-LM with MindSpeedLLM, running on Huawei Ascend 910B NPUs; batched chunk prefetching (2-chunk fill) to amortize top-K computation (Liang et al., 13 Jun 2025).
  • Hyperparameters: Typical lag window $L = 1024$, $r = 0.5$, batch size 16, chunk size matching window size. For temporal causality, $2k$ is set to twice the maximal expected lag, and $k$ controls sparsity (Mahesh et al., 2024).
  • Masking strategies: Static block-triangular for LLMs; lower-triangular (time) and column (variable) masking in causality models.
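As an illustration of the static block-triangular masking, the sketch below builds the additive mask for one chunk's queries, assuming a key-axis layout of [compressed entries of earlier chunks | current chunk]; the layout and helper name are assumptions for this example, and the kernel-level mask in the actual implementation may differ.

```python
import numpy as np

def chunk_mask(p, chunk_len, kept_per_chunk):
    """Additive attention mask for chunk p (0 = visible, -inf = blocked).

    Queries in chunk p see every compressed entry retained from chunks
    0..p-1, plus a causal (lower-triangular) view of their own chunk.
    """
    n_cache = p * kept_per_chunk
    m = np.zeros((chunk_len, n_cache + chunk_len))
    # Strictly-upper-triangular -inf block enforces causality within the chunk.
    local = np.triu(np.full((chunk_len, chunk_len), -np.inf), k=1)
    m[:, n_cache:] = local
    return m

# Chunk p=2, chunk length 4, 2 compressed entries kept per earlier chunk:
# the key axis has 2*2 = 4 cache columns followed by 4 local columns.
m = chunk_mask(p=2, chunk_len=4, kept_per_chunk=2)
```

Because the retained indices are fixed once a chunk is compressed, these masks can be precomputed and reused, which is what makes LRSA's sparsity pattern "static" at attention time.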

5. Empirical Performance and Ablation

5.1. Long-Context Language Modeling

Experimental evaluation on Qwen2.5-1.5B-Base (32K context) using four synthetic long-text tasks (LongAlign-10k, LongAlpaca-12k, RefGPT-Fact-v2-8x, Anti-Haystack) demonstrated:

  • Stable convergence: LRSA yields training loss comparable to full attention, with no instability (Liang et al., 13 Jun 2025).
  • Robustness to compression: Under 2–4× context compression, LRSA-fine-tuned models maintain QA scores 3–6 points higher than vanilla at 8K/16K context. Average performance at 4× compression (32K) is ≈42 vs. ≈36 (vanilla).
  • Ablation: Varying $r$ shows a direct trade-off: smaller $r$ saves memory at some cost to accuracy, but LRSA consistently dominates the full-attention baseline at each tested $r$.

5.2. Granger Causality in Multivariate Time Series

Empirical results on synthetic datasets (max lag 10, $D = 4, 5, 6, 10$, $T = 150$) compared SAT-LRSA to fixed-lag VAR:

  • On $D = 4$: AUC-ROC 0.77 (LRSA) vs. 0.47 (VAR); F1 0.72 vs. 0.63.
  • On $D = 10$: AUC-ROC 0.78 (LRSA) vs. 0.63 (VAR); F1 0.65 vs. 0.62.

LRSA's dynamic top-$k$ selection adaptively ignores noisy or irrelevant lags, yielding higher accuracy and more interpretable causality matrices (Mahesh et al., 2024).

6. Limitations and Discussion

LRSA employs query-independent pruning: all queries within a chunk attend only to tokens retained by top-K scoring based on key/value statistics. Unlike query-aware strategies (e.g., QUEST), this can overlook context critical for certain queries (Liang et al., 13 Jun 2025). Balancing efficiency against recall therefore requires careful tuning of $L$ and $r$. In causality discovery, multi-head attention and positional encoding are central to effective temporal ordering but are not explicitly extended in the current LRSA framework (Mahesh et al., 2024).

7. Prospects and Future Directions

Key avenues for extending LRSA include introducing query-aware dynamic retention, parameterizing (and thus learning) the retention ratio or scoring function, and generalizing the method to multi-modal sequences such as images or videos. Integration with differentiable attention enables end-to-end training under the same mask regime used at inference, which is not achievable with non-differentiable compression approaches (Liang et al., 13 Jun 2025). A plausible implication is that further research into learned or query-adaptive sparsity will improve both efficiency and recall, particularly for real-world tasks exhibiting highly nonstationary or multimodal lag structures.

