Higher-order Linear Attention (2510.27258v1)

Published 31 Oct 2025 in cs.LG, cs.AI, and cs.CL

Abstract: The quadratic cost of scaled dot-product attention is a central obstacle to scaling autoregressive LLMs to long contexts. Linear-time attention and State Space Models (SSMs) provide scalable alternatives but are typically restricted to first-order or kernel-based approximations, which can limit expressivity. We introduce Higher-order Linear Attention (HLA), a causal, streaming mechanism that realizes higher-order interactions via compact prefix sufficient statistics. In the second-order case, HLA maintains a constant-size state and computes per-token outputs in linear time without materializing any $n \times n$ matrices. We give closed-form streaming identities, a strictly causal masked variant using two additional summaries, and a chunk-parallel training scheme based on associative scans that reproduces the activations of a serial recurrence exactly. We further outline extensions to third and higher orders. Collectively, these results position HLA as a principled, scalable building block that combines attention-like, data-dependent mixing with the efficiency of modern recurrent architectures. Project Page: https://github.com/yifanzhang-pro/HLA.

Summary

  • The paper introduces Higher-order Linear Attention (HLA) that extends linear attention by efficiently incorporating higher-order interactions using compact prefix statistics.
  • It achieves linear-time per-token outputs with a constant-size state per head, and uses associative scans for exact chunk-parallel training of the causal streaming recurrence.
  • The method enhances scalability in long-context autoregressive models by bridging expressive attention with memory-efficient recurrent architectures.

Higher-order Linear Attention

Introduction

The paper "Higher-order Linear Attention" introduces a novel mechanism, Higher-order Linear Attention (HLA), designed to overcome the computational and memory intensity associated with scaled dot-product attention in autoregressive LLMs. The key challenge addressed is the quadratic complexity inherent in traditional attention mechanisms, which limits scalability to long contexts. Linear-time attention variants and State Space Models (SSMs) exist as scalable alternatives, but these are often confined to first-order or kernel-based approximations that restrict expressivity. HLA extends linear attention by efficiently incorporating higher-order interactions through compact prefix statistics, offering substantial computational benefits while retaining expressivity.

Core Concepts

HLA is a causal, streaming attention mechanism that enriches traditional linear attention with higher-order interactions. In its second-order form, HLA maintains a constant-size state per head and computes per-token outputs in linear time, without ever materializing an $n \times n$ attention matrix. The approach leverages low-order prefix moments, such as sums of key outer products, to enable exact causal streaming. The paper provides closed-form streaming identities and a chunk-parallel training scheme based on associative scans that exactly reproduces the activations of a serial recurrence.
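The following minimal sketch illustrates the flavor of this construction in code; it is an assumption-laden illustration, not the paper's exact streaming identity. It posits a simple second-order read-out $y_t = (q_t^\top Z_t)\, S_t$ built from two running summaries, $Z_t = \sum_{i \le t} k_i k_i^\top$ and $S_t = \sum_{i \le t} k_i v_i^\top$, solely to show how constant-size prefix statistics give causal, linear-time outputs without any $n \times n$ matrix.

```python
# Illustrative sketch only: an assumed second-order read-out over two
# constant-size prefix summaries, not the paper's exact HLA identities.
import numpy as np

def streaming_second_order(Q, K, V):
    """Q, K: (n, d); V: (n, d_v). Returns per-token outputs of shape (n, d_v)."""
    n, d = Q.shape
    d_v = V.shape[1]
    Z = np.zeros((d, d))     # running sum of k_i k_i^T   (key second moment)
    S = np.zeros((d, d_v))   # running sum of k_i v_i^T   (first-order summary)
    Y = np.zeros((n, d_v))
    for t in range(n):
        q, k, v = Q[t], K[t], V[t]
        Z += np.outer(k, k)  # O(d^2) state update
        S += np.outer(k, v)  # O(d * d_v) state update
        # Read-out evaluated right-to-left: (q^T Z) is a d-vector, then times S.
        # Cost per token: O(d^2 + d * d_v); no n x n matrix is ever formed.
        Y[t] = (q @ Z) @ S
    return Y
```

Under this assumed form, each token touches only the $d \times d$ and $d \times d_v$ summaries, so both the state size and the per-token work are independent of the sequence length.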

The paper also presents extensions to third and higher orders, highlighting the scalability and versatility of HLA for long-context models. The proposed mechanism is detailed with comprehensive algebra of extended summaries that enforce strict causality in streaming updates, ensuring efficiency in both computation and memory usage.

Implementation Details

HLA's implementation is designed to be efficient and scalable. For the second-order case, the mechanism maintains a constant-size state per head, requiring $O(d^2 + d \times d_v)$ operations for each token, where $d$ is the query/key dimension and $d_v$ is the value dimension. This efficiency is achieved without constructing $\mathbf{A} \times \mathbf{A}^\top$ matrices, due to the structuring of prefix summaries and associative scan techniques.
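For the illustrative two-summary form sketched above (again an assumption, not the paper's exact identity), the per-token work decomposes as follows and matches the stated bound:

```latex
\begin{aligned}
Z_t &= Z_{t-1} + k_t k_t^\top     &&\qquad O(d^2) \ \text{(state update)}\\
S_t &= S_{t-1} + k_t v_t^\top     &&\qquad O(d\, d_v) \ \text{(state update)}\\
y_t &= (q_t^\top Z_t)\, S_t       &&\qquad O(d^2) + O(d\, d_v) \ \text{(read-out)}
\end{aligned}
```

Summing these terms gives $O(d^2 + d\, d_v)$ work, and $O(d^2 + d\, d_v)$ state, per token per head, independent of the context length $n$.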

The development of HLA involves:

  • Prefix Statistics: Compact representations that accumulate key and query interactions over time, allowing efficient streaming updates and minimal state size.
  • Masked Streaming Identities: These are essential for enforcing autoregressive causality, ensuring that HLA operates within the constraints required for autoregressive LLMs.
  • Associative Scans: An important component for parallel training, allowing the system to handle intra- and inter-chunk parallelism efficiently (see the sketch after this list).
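The sketch below, under the same assumed two-summary form, shows why chunk-parallel training can reproduce the serial recurrence exactly: the summaries are plain prefix sums, so the combine operator is associative, and per-chunk partials can be computed independently and then merged across chunks. The function name and chunking layout are illustrative, not the paper's algorithm.

```python
# Illustrative chunk-parallel computation of the assumed prefix summaries.
# Intra-chunk prefix sums are independent across chunks; the inter-chunk
# carry is an associative combine (here a plain sum, shown serially).
import numpy as np

def chunked_prefix_summaries(K, V, chunk_size):
    """K: (n, d); V: (n, d_v). Returns Z_t (n, d, d) and S_t (n, d, d_v)."""
    n, d = K.shape
    d_v = V.shape[1]
    Z_carry = np.zeros((d, d))
    S_carry = np.zeros((d, d_v))
    Z_all = np.empty((n, d, d))
    S_all = np.empty((n, d, d_v))
    for start in range(0, n, chunk_size):
        end = min(start + chunk_size, n)
        # Intra-chunk inclusive prefix sums (parallel across chunks in practice).
        kk = np.einsum('ti,tj->tij', K[start:end], K[start:end])
        kv = np.einsum('ti,tj->tij', K[start:end], V[start:end])
        Z_all[start:end] = Z_carry + np.cumsum(kk, axis=0)
        S_all[start:end] = S_carry + np.cumsum(kv, axis=0)
        # Inter-chunk combine: carry the running totals forward.
        Z_carry = Z_all[end - 1]
        S_carry = S_all[end - 1]
    return Z_all, S_all
```

In practice the inter-chunk carries would themselves be computed with a parallel associative scan rather than the serial loop shown here; because addition is associative, the result matches the serial recurrence token for token.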

Practical Implications and Extensions

HLA's efficiency makes it a promising building block for autoregressive models operating over long contexts, such as those found in NLP applications. It bridges attention-like, data-dependent mixing and state-efficient recurrent architectures, making it a versatile tool for sequence modeling. By supporting higher-order interactions, HLA may capture richer dependencies than first-order linear attention or state-space models.

Future developments could explore further optimization of multi-order extensions and integration with other scalable sequence modeling frameworks, potentially broadening HLA's applicability across domains that require efficient long-sequence processing.

Conclusion

"Higher-order Linear Attention" delineates a scalable alternative to traditional and linear attention mechanisms. With its ability to handle higher-order interactions efficiently, HLA provides a robust foundation for deploying attention mechanisms in long-context autoregressive models, marrying the expressivity of attention with the efficiency of modern recurrent architectures. The paper's contributions pave the way for more expressive and computationally feasible models in natural language processing and beyond.
