
LASP-2H: Hybrid Sequence Parallelism

Updated 4 October 2025
  • LASP-2H is a sequence parallelism technique for hybrid transformer architectures that integrate linear and softmax attention, optimizing training for very long sequences.
  • It replaces traditional ring-style communication with a single AllGather operation per layer, reducing latency and ensuring memory efficiency independent of sequence length.
  • Empirical benchmarks demonstrate enhanced scalability and speedups, achieving up to 36.6% faster training on multi-GPU clusters for million-token sequences.

LASP-2H is a sequence parallelism methodology for hybrid transformer architectures that integrate both linear and standard (softmax) attention mechanisms. It extends LASP-2, a method originally designed for efficient and scalable distributed training of linear attention models, and applies a unified communication redesign to enable efficient scaling on hybrid models, such as those blending linear attention and traditional transformer layers. The technique is specifically optimized for training extremely long input sequences on multi-GPU clusters, demonstrated via comparative benchmarks with Linear-Llama3 across up to 64 GPUs and sequence lengths of up to 2048K tokens (Sun et al., 11 Feb 2025).

1. Unified Sequence Parallelism in Hybrid Attention Models

LASP-2H generalizes LASP-2’s communication and computation workflow redesign from linear attention to standard attention. In conventional sequence parallelism for transformer models, attention computation for long sequences is split across devices, often using ring-style communication, which results in multiple sequential point-to-point exchanges. This pattern restricts concurrent communication and computation, limiting scalability for both linear and standard attention variants.

LASP-2H modifies this paradigm by shifting all device communication to a single AllGather collective operation per attention layer, regardless of attention type:

  • Linear Attention: AllGather exchanges only the intermediate memory state $M_t$, a $d \times d$ matrix whose size is independent of sequence length.
  • Standard Attention: AllGather collects the key ($K_t$) and value ($V_t$) tensors across devices.

This approach enables communication to overlap with computation, maximizing throughput and minimizing bottlenecks as sequence lengths and device counts increase.
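The difference in payload size can be checked directly. The following sketch uses illustrative dimensions (not values from the paper) to show why the linear-attention payload stays fixed while the standard-attention payload grows with the local chunk:

```python
import torch

d = 128           # head dimension (illustrative)
chunk_len = 8192  # local sequence chunk per device (illustrative)

K_t = torch.randn(chunk_len, d)
V_t = torch.randn(chunk_len, d)

# Linear attention communicates only the d x d memory state:
M_t = K_t.T @ V_t
assert M_t.shape == (d, d)  # independent of chunk_len

# Standard attention must gather the K/V chunks themselves:
assert K_t.shape == (chunk_len, d)  # grows with the local chunk size
```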

2. Communication Pattern Redesign

At the algorithmic level, LASP-2H replaces ring communication with an AllGather-based protocol for both types of attention layers:

  • Linear attention computation (see the sketch after this list):
    • Compute the local query $Q_t$, key $K_t$, and value $V_t$ on each device's sequence chunk.
    • Derive the intermediate memory state $M_t = K_t^\top V_t$.
    • Apply AllGather across devices: each device receives every $M_t$.
    • Output: $O_t = Q_t M_{1:t}$, where $M_{1:t}$ is the sum of the gathered memory states up to chunk $t$.
  • Standard attention computation:
    • Compute $Q_t$, $K_t$, $V_t$ on each device's sequence chunk.
    • AllGather $K_t$ and $V_t$ across devices.
    • Concatenate the received keys and values into $K$ and $V$, then compute attention locally: $O_t = \mathrm{Softmax}(Q_t K^\top / \sqrt{d})\, V$.
    • Because the full context is available after gathering, this path supports attention masking schemes such as document-level masks.
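To make the two paths concrete, below is a minimal single-head PyTorch sketch. It assumes a torch.distributed process group is already initialized and that each device holds one contiguous sequence chunk; the function names are illustrative, and the linear path shows the non-causal simplification rather than the paper's reference implementation.

```python
import torch
import torch.distributed as dist

def lasp2h_linear_attention(Q_t, K_t, V_t, world_size):
    # Local memory state: a d x d matrix, independent of chunk length.
    M_t = K_t.transpose(-2, -1) @ V_t
    # Single AllGather per layer: every device receives every M_t.
    gathered = [torch.empty_like(M_t) for _ in range(world_size)]
    dist.all_gather(gathered, M_t)
    # Non-causal combination shown for brevity; the causal form sums
    # only the memory states of preceding chunks (M_{1:t}).
    M = torch.stack(gathered).sum(dim=0)
    return Q_t @ M                                  # (T_local, d)

def lasp2h_standard_attention(Q_t, K_t, V_t, world_size):
    d = Q_t.shape[-1]
    # AllGather the local K/V chunks so each device sees the full context.
    gathered_K = [torch.empty_like(K_t) for _ in range(world_size)]
    gathered_V = [torch.empty_like(V_t) for _ in range(world_size)]
    dist.all_gather(gathered_K, K_t)
    dist.all_gather(gathered_V, V_t)
    K = torch.cat(gathered_K, dim=0)                # full-sequence keys
    V = torch.cat(gathered_V, dim=0)                # full-sequence values
    scores = Q_t @ K.transpose(-2, -1) / d ** 0.5
    return torch.softmax(scores, dim=-1) @ V        # (T_local, d)
```

In practice the AllGather would be issued asynchronously so that it overlaps with intra-chunk computation, which is the overlap property discussed in the next section.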

Communication tensor sizes in both cases are independent of the global sequence length, scaling only with the model dimension $d$ and the device count, enabling efficient training of million-token sequences.

3. Computational and Memory Efficiency

The AllGather-based workflow permits overlapping communication with computation, facilitating improved resource utilization on large clusters, especially those with moderate interconnect speeds. LASP-2H limits total communication steps per iteration to two, regardless of the number of sequence splits:

| Method | Communication steps per iteration | Communication tensor size |
|---|---|---|
| Ring-style SP | $K$ (number of splits/devices) | Scales with sequence length |
| LASP-2 / LASP-2H | 2 | Intermediate $d \times d$ state, or the local $K_t, V_t$ chunks (scales with $d$, not with global sequence length) |

This reduction in communication cost and its independence from sequence length is a distinguishing property of LASP-2H, supporting larger model and sequence scales than previous SP schemes.
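As a back-of-envelope illustration (assumed model dimension and fp16 payloads, not figures from the paper), the gathered linear-attention state stays a few megabytes even at million-token scale:

```python
d, num_devices, seq_len = 2048, 64, 2_048_000   # assumed values
chunk = seq_len // num_devices                  # 32,000 tokens per device
bytes_fp16 = 2

linear_payload = d * d * bytes_fp16             # d x d memory state
kv_payload = 2 * chunk * d * bytes_fp16         # K_t and V_t chunks

print(linear_payload)  # 8,388,608 bytes (~8 MB), fixed as seq_len grows
print(kv_payload)      # 262,144,000 bytes (~262 MB), grows with chunk size
```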

4. Empirical Performance and Scalability

LASP-2H’s performance was evaluated on the Linear-Llama3 model and hybrid transformer architectures with standard and linear attention mixes:

  • Speedup: On Linear-Llama3 (sequence length 2048K, 64 GPUs), LASP-2 achieves 15.2% faster training than LASP-1 and 36.6% faster than Ring Attention.
  • Convergence: Loss values and model accuracy match or slightly improve upon traditional attention models, demonstrating LASP-2H’s suitability for both linear and hybrid systems.
  • Scalability: The method maintains throughput advantages as the device count ($K$) and sequence length increase, showing pronounced gains at scales relevant for long-context reasoning, retrieval, and in-context learning.

A plausible implication is that LASP-2H can facilitate efficient model scaling in distributed environments where network bandwidth or latency is a constraint.

5. Application Scenarios and Integration

LASP-2H is tailored for hybrid models that employ both linear attention layers (for context scalability) and standard transformer layers (for accuracy and expressiveness):

  • Recall-intensive applications: Models that process extremely long contexts, including genomics, document-level language modeling, and long-horizon temporal tasks.
  • Distributed environments: LASP-2H's design is agnostic to cluster topology and accommodates various attention masks and intra-model variations, providing flexibility for integration with tensor or pipeline parallel frameworks.

The unified communication strategy permits simpler integration and compositionality with other parallelization paradigms, supporting diverse distributed system architectures.

6. Future Directions and Enhancements

LASP-2H suggests further refinement in communication–computation overlap, such as synchronizing AllGather operations with intra-chunk computations to eliminate residual overhead. There is motivation to assess its performance under heterogeneous hardware, diverse communication topologies, and in conjunction with advanced optimization frameworks (e.g., zero-redundancy optimizers, fully sharded data parallelism).

Plausible avenues for future research include alternative attention mixing ratios, extended hybrid architectures, and targeted application benchmarks, especially in extreme long-sequence domains.

7. Summary of Methodology and Results

LASP-2H represents a unified approach to scalable distributed training of hybrid transformer models. By applying AllGather-based sequence parallelism to both linear and standard attention, it achieves:

  • Constant-memory and linear-time training of long sequences.
  • Uniform communication patterns independent of sequence length.
  • Demonstrated empirical speedups and scalability (36.6% improvement over ring attention baseline for million-token sequences).
  • Applicability to both pure linear attention models and hybrids blending linear and softmax transformer layers.

This methodology enables accelerated, resource-efficient development of long-sequence neural architectures in distributed, multi-GPU settings (Sun et al., 11 Feb 2025).

References (1)

  1. Sun et al., "LASP-2: Rethinking Sequence Parallelism for Linear Attention and Its Hybrid," 11 Feb 2025.
