
Query-Broadcast Attention

Updated 9 December 2025
  • Query-broadcast attention is a mechanism that contracts token representations into a compact set and broadcasts critical information back to the full sequence for efficiency.
  • The CBSA method employs a two-step block-coordinate process—contraction and broadcast—to achieve linear computational complexity and improved interpretability.
  • In diffusion models, Pyramid Attention Broadcast reuses attention outputs across timesteps, significantly reducing redundant computation in video generation.

Query-broadcast attention encompasses a family of attention mechanisms that compress or cache attention computations—typically by contracting token representations into a compact set and then broadcasting information back to the full sequence—to improve efficiency and interpretability. Prominent methods include Contract-and-Broadcast Self-Attention (CBSA) for efficient representation learning in transformers and Pyramid Attention Broadcast (PAB) for minimizing redundant attention computation in diffusion-based video generation. Both approaches exemplify query-broadcast dynamics: a small set of queries (or representatives) is identified, attention is computed over it, and the results are broadcast back to the full sequence, reducing the need for frequent or full attention computation.

1. Contract-and-Broadcast Self-Attention (CBSA): Theoretical Underpinnings

CBSA is derived from a unified optimization perspective, starting with a maximal coding-rate reduction (MCR²) objective that seeks to compress token sets into a compact yet informative representation. This approach explicitly formalizes the goals of interpretability and efficiency within self-attention layers.

The MCR² objective for tokens $Z \in \mathbb{R}^{d \times N}$ is given as:

$$\max_{Z}\;\Delta R(Z) = R(Z) - \sum_{k=1}^K R(U_k^\top Z),$$

where $U_k \in O(d, p)$ spans $K$ incoherent $p$-dimensional subspaces and $R(\cdot)$ is a coding-rate function based on log-determinant regularization.

CBSA introduces a small set of representatives $Q_k \in \mathbb{R}^{d \times m}$ per subspace, enforcing that compressing these carries information equivalent to contracting all tokens in that subspace. This yields the constrained objective:

$$\min_{Z}\; \sum_{k=1}^K R(Q_k) \quad\text{s.t.}\quad |R(Q_k)-R(U_k^\top Z)|\le\tau, \quad Q_k=Z A_k,$$

with $A_k\in\mathbb{R}^{N\times m}$ the contraction coefficients (Wen et al., 21 Sep 2025).
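
To make the coding-rate quantities concrete, the following NumPy sketch evaluates $R(\cdot)$ and $\Delta R$ on a toy token matrix. The normalization $d/(N\varepsilon^2)$ inside the log-determinant follows the standard MCR² formulation and, like the toy dimensions, is an assumption of this sketch rather than code from the CBSA paper.

import numpy as np

def coding_rate(Z, eps=0.5):
    # R(Z) = 1/2 logdet(I + d/(N*eps^2) Z Z^T); standard MCR^2 form (assumed normalization)
    d, N = Z.shape
    alpha = d / (N * eps ** 2)
    return 0.5 * np.linalg.slogdet(np.eye(d) + alpha * Z @ Z.T)[1]

def delta_R(Z, U_list, eps=0.5):
    # MCR^2 objective: R(Z) - sum_k R(U_k^T Z) for subspace bases U_k of shape (d, p)
    return coding_rate(Z, eps) - sum(coding_rate(U.T @ Z, eps) for U in U_list)

# toy example: d = 16, N = 64 tokens, K = 4 subspaces of dimension p = 4
rng = np.random.default_rng(0)
Z = rng.standard_normal((16, 64))
U_list = [np.linalg.qr(rng.standard_normal((16, 4)))[0] for _ in range(4)]
print(delta_R(Z, U_list))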

2. Algorithmic Realization: Contract-and-Broadcast Flow

CBSA unrolls a two-step block-coordinate optimization—contract and broadcast—into a purely feedforward operator:

  • Contraction: Project each token block into its $p$-dimensional head subspace, extract $m$ representatives via cross-attention, then perform self-attention-based contraction over them. The contraction per head $k$ is:

$$\text{Contraction}_k = U_k^\top Q_k \operatorname{softmax}\bigl((U_k^\top Q_k)^\top (U_k^\top Q_k)\bigr).$$

  • Broadcast: Broadcast the contracted information across all tokens using the attention coefficient transpose:

$$\text{Broadcast}_k = A_k^\top = \left[\operatorname{softmax}\left((U_k^\top Z)^\top (U_k^\top Q_k)\right)\right]^\top.$$

The final CBSA transformation is:

$$\operatorname{CBSA}(Z\mid U) = \sum_{k=1}^K U_k\left[U_k^\top Q_k \operatorname{softmax}\left((U_k^\top Q_k)^\top (U_k^\top Q_k)\right)\right] A_k^\top,$$

where contraction and broadcast are computed per attention head (Wen et al., 21 Sep 2025).
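
A minimal NumPy sketch of this per-head operator is shown below, assuming the representatives $Q_k$ are already available (in CBSA they are extracted from $Z$ via cross-attention; here they are formed from random contraction coefficients purely for illustration). The softmax axes and the absence of temperature scaling are further assumptions of the sketch, not specifications from the paper.

import numpy as np

def softmax(X, axis=0):
    E = np.exp(X - X.max(axis=axis, keepdims=True))
    return E / E.sum(axis=axis, keepdims=True)

def cbsa_head(Z, U, Q):
    # Z: (d, N) tokens, U: (d, p) head subspace basis, Q: (d, m) representatives
    UQ = U.T @ Q                                   # (p, m) projected representatives
    UZ = U.T @ Z                                   # (p, N) projected tokens
    contraction = UQ @ softmax(UQ.T @ UQ, axis=0)  # (p, m) self-attention over representatives
    A = softmax(UZ.T @ UQ, axis=1)                 # (N, m) broadcast coefficients A_k
    return U @ (contraction @ A.T)                 # (d, N) head output

def cbsa(Z, U_list, Q_list):
    # CBSA(Z | U): sum of per-head contract-and-broadcast outputs
    return sum(cbsa_head(Z, U, Q) for U, Q in zip(U_list, Q_list))

# toy shapes: d = 16, N = 64 tokens, K = 4 heads, p = 4, m = 8 representatives per head
rng = np.random.default_rng(0)
Z = rng.standard_normal((16, 64))
U_list = [np.linalg.qr(rng.standard_normal((16, 4)))[0] for _ in range(4)]
Q_list = [Z @ softmax(rng.standard_normal((64, 8)), axis=0) for _ in range(4)]  # Q_k = Z A_k
print(cbsa(Z, U_list, Q_list).shape)  # (16, 64)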

3. Complexity and Generalization

CBSA achieves linear complexity in the number of tokens $N$ for practical parameter regimes $m, p \ll N$, as compared to the quadratic complexity of softmax self-attention. Specifically:

  • CBSA per-layer complexity: $O(Nd^2 + Nmd + m^2 d) \approx O(Nd^2)$.
  • Memory usage: $O(Nd + md)$ for activations, versus $O(N^2)$ for full softmax attention.
  • Special cases: CBSA recovers full softmax attention (when $m=N$ and $A_k=I_N$), linear attention (when the $Q_k$ are principal components of $U_k^\top Z$), and channel attention (when the $Q_k$ are fixed orthonormal bases) (Wen et al., 21 Sep 2025).
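
As a back-of-the-envelope check on these bounds, the snippet below counts the dominant multiply-adds under illustrative shapes; the values N = 4096, d = 384, m = 64 and the leading constants are arbitrary assumptions chosen for illustration, not figures from the paper.

# rough per-layer multiply-add counts; shapes and constants are illustrative only
N, d, m = 4096, 384, 64

softmax_attention_ops = N * N * d                # pairwise token similarities dominate: O(N^2 d)
cbsa_ops = N * d * d + N * m * d + m * m * d     # projections + broadcast + contraction: O(N d^2)

print(f"softmax attention ~ {softmax_attention_ops:.2e} multiply-adds")
print(f"CBSA              ~ {cbsa_ops:.2e} multiply-adds")
print(f"ratio             ~ {softmax_attention_ops / cbsa_ops:.1f}x")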

4. Broadcast Attention and Redundancy in Diffusion

Broadcast attention has further applications in iterative architectures like diffusion models, where per-step computations exhibit high redundancy. In Pyramid Attention Broadcast (PAB) for DiT-based video diffusion, broadcast attention directly reuses the output of an attention layer over multiple timesteps, conditioned on measured redundancy.

  • Formal definition: If $O_{t_0}$ is the attention output at the broadcast origin $t_0$, then for $\tau = 1, \ldots, B$, set $O_{t_0+\tau} \leftarrow O_{t_0}$ instead of recomputing attention at each step.
  • Redundancy diagnosis: The per-step output change $D_t = E[\|O_t - O_{t-1}\|^2]$ is empirically U-shaped, large near the first and last steps but minimal over the central ~70% of diffusion steps (Zhao et al., 22 Aug 2024).
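
As an illustration of this diagnostic, the sketch below computes $D_t$ from a list of cached attention outputs collected during sampling; the array shapes and the random stand-ins for real attention outputs are assumptions of the example.

import numpy as np

def stepwise_redundancy(outputs):
    # D_t = mean squared change of the attention output between consecutive steps;
    # low values mark steps whose output can be broadcast rather than recomputed
    return [float(np.mean((o_t - o_prev) ** 2))
            for o_prev, o_t in zip(outputs[:-1], outputs[1:])]

# toy stand-in: random arrays in place of cached attention outputs from 30 diffusion steps
rng = np.random.default_rng(0)
outputs = [rng.standard_normal((8, 128)) for _ in range(30)]
print(stepwise_redundancy(outputs)[:5])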

5. Pyramid-Style and Distributed Broadcast Mechanisms

PAB adapts broadcast window sizes hierarchically according to attention type:

  • Spatial attention: Shortest broadcast range ($B_s$)
  • Temporal attention: Intermediate ($B_t > B_s$)
  • Cross attention: Longest ($B_c > B_t$)

The algorithm proceeds as follows:

for t in range(1, T + 1):
    for m in ("S", "T", "C"):                 # spatial, temporal, cross attention
        if stable_start <= t <= stable_end and (t - last_compute[m]) <= B[m]:
            O[m][t] = O[m][last_compute[m]]   # broadcast: reuse cached attention output
        else:
            O[m][t] = attention[m](X[t])      # full attention computation
            last_compute[m] = t
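
In the (2,4,6) single-GPU configuration reported in Section 6, B[m] would plausibly correspond to broadcast ranges of 2, 4, and 6 steps for spatial, temporal, and cross attention respectively, with stable_start and stable_end delimiting roughly the central 70% of diffusion steps identified by the redundancy diagnostic; these boundaries are configuration choices rather than fixed constants of the method.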

PAB further extends to distributed settings (broadcast sequence parallel), eliminating inter-GPU communication during broadcasted temporal attention steps, reducing required bandwidth and increasing throughput (Zhao et al., 22 Aug 2024).

6. Empirical Findings and Comparative Performance

CBSA

  • Compression via representatives: The coding rate $R(Z)$ decreases progressively across CBSA layers and correlates well with classification accuracy; contraction over a handful of representatives suffices.
  • Role of broadcast: Ablation experiments that remove the broadcast step (i.e., replace $A_k^\top$ with the identity) result in considerable accuracy degradation.
  • Emergent segmentation and robustness: Early layers show emergent object-segmentation properties, and the model is robust to perturbations of the bases $U_k$.
  • Efficiency: On ImageNet-1K, a CBSA-Small model matches ViT-Small accuracy (≈71.4%) with ∼40% of the pairwise similarity operations of standard attention.

PAB

  • Speedup: On a single GPU (Open-Sora, 30 steps, 480p video), PAB with (2,4,6) broadcasting achieves a 1.34× acceleration with negligible quality drop (<1% VBench loss). More aggressive ranges (3,5,7) and (5,7,9) further improve speed at a slight cost to quality.
  • Scaling: On 8×H100 GPUs, broadcast sequence parallelism yields a 10.6× end-to-end speedup with ∼50% less communication.
  • Latencies: Attention operations constitute 10–20% of runtime, with attention-related overhead (norm, projection, reshape) at ∼30%. PAB eliminates most of this overhead in redundant steps (Zhao et al., 22 Aug 2024).

7. Connections, Special Cases, and Implications

Both CBSA and PAB generalize classical attention via query-broadcast mechanisms:

  • CBSA as a unifying framework: Varies the number of representatives, projection structure, and contraction mechanics to interpolate between full, linear, and channel attention.
  • PAB in iterative models: Empirically validates that much attention activity is redundant across diffusion steps, motivating output-level reuse without retraining.

A plausible implication is that structured query-broadcast paradigms will continue to underlie both general-purpose transformers and domain-specific architectures where efficiency and interpretability are paramount. These methods further suggest that the contraction–broadcast decomposition is a powerful axis for analyzing and optimizing attention beyond mere computation reduction.


