Compressed Convolutional Grouped Query Attention

Updated 7 October 2025
  • The paper introduces CCGQA, integrating latent-space compression with grouped query attention to significantly cut compute costs and memory usage in transformers.
  • It employs convolutional down-projection, specialized q–k mean operations, and head sharing to enable tunable trade-offs between FLOP reduction and cache efficiency.
  • Empirical results demonstrate up to 8× KV-cache reduction and 4× FLOP savings on state-of-the-art GPUs while sustaining or enhancing model quality.

Compressed Convolutional Grouped Query Attention (CCGQA) is an attention mechanism designed to simultaneously reduce the memory and compute costs of transformer models, particularly in long-context regimes. CCGQA integrates two methodological streams—latent-space compression of attention (Compressed Convolutional Attention, CCA) and parameter (head) sharing from Grouped Query Attention (GQA)—to perform all attention operations inside a compressed latent space with additional weight sharing across grouped heads. This dual compression strategy tightens the compute–memory Pareto frontier and delivers a tunable trade-off between computational intensity and cache size, all while preserving or even improving model quality relative to matched baselines (Figliolia et al., 6 Oct 2025).

1. Conceptual Foundations and Motivation

CCGQA was proposed to address the inefficiencies inherent in standard multi-head self-attention, which exhibits quadratic compute scaling in sequence length and a cache size growing linearly with both sequence length and hidden dimension. Existing schemes such as GQA reduce KV-cache size by grouping heads to share key/value parameters, thereby reducing redundant memory storage, whereas Multi-Latent Attention (MLA) and related latent-space approaches compress keys/values into a smaller latent representation but often incur additional up-projection cost and complications with positional encodings (Figliolia et al., 6 Oct 2025).

CCGQA achieves a more comprehensive efficiency improvement by (1) projecting queries, keys, and values to a compact latent space using linear and convolutional operations, and (2) performing the full attention computation within this space, augmented by GQA-style head grouping. This approach supports different compression rates for queries and keys/values, enabling users to select operating points along both FLOP and memory dimensions.

2. Technical Design and Mathematical Formulation

CCGQA consists of a sequence of down-projection, convolutional mixing, head grouping, and specialized value-shift and q–k-mean transformations.

Let $x \in \mathbb{R}^{S \times E}$ be the hidden state (sequence length $S$, embedding dimension $E$). The core steps are:

  1. Linear Down-Projection:

$$\tilde{q} = \tilde{W}_Q x, \qquad \tilde{k} = \tilde{W}_K x, \qquad \tilde{v} = \tilde{W}_V x$$

where $\tilde{W}_Q, \tilde{W}_K, \tilde{W}_V \in \mathbb{R}^{E \times \tilde{d}}$, with $\tilde{d} = E / C$ for compression factor $C$.

  2. Convolutional Mixing (Two-Step):

$$\tilde{q} \leftarrow \text{conv}_{\text{seq}}(\tilde{q}), \qquad \tilde{q} \leftarrow \text{conv}_{\text{seq+ch}}(\tilde{q})$$

Similar operations are applied to $\tilde{k}$, with convolutions spanning both sequence and channel dimensions.

  3. Grouped Query Attention in Latent Space: Key and value heads are shared across query head groups (e.g., 4 query heads per group). Let $G$ denote the grouping factor; within each group, all query heads use a shared key and value.
  4. q–k Mean Operation: A form of bias injection and residual averaging is performed between unmodified queries and keys (or their grouped versions):

$$\tilde{qk}_\mu = \frac{1}{2}\left(\tilde{q}_{\text{pre}} + B_{\text{group}}(\tilde{k}_{\text{pre}})\right)$$

$$\tilde{q} \leftarrow \tilde{q} + \tilde{qk}_\mu, \qquad \tilde{k} \leftarrow \tilde{k} + E_{\text{group}}(\tilde{qk}_\mu)$$

Here, $B_{\text{group}}$ and $E_{\text{group}}$ denote group-wise broadcasting and averaging, respectively.

  5. Value Shift: For values, CCGQA concatenates two projections, one computed from the current token and one from the previous token:

$$\tilde{v}_t = \tilde{W}_V x_t, \qquad \bar{v}_{t-1} = \tilde{W}_{\widehat{V}} x_{t-1}$$

$$\tilde{v} = \text{concat}(\tilde{v}_t, \bar{v}_{t-1})$$

  6. Normalization and Positional Encoding: Queries and keys are then L2-normalized and scaled by $\sqrt{d_h}$, with RoPE positional embeddings incorporated within the compressed space.
  7. Latent-Space Attention Computation:

$$\tilde{o}_h = \tilde{v}_h \cdot \text{softmax}\left(\frac{1}{\sqrt{d}}\, \tilde{q}_h \tilde{k}_h^{T}\right)$$

An up-projection $\tilde{W}_O$ maps the output back to the full embedding dimension. A shape-level sketch of these steps is given below.
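To make the shape bookkeeping of these steps concrete, the following is a minimal PyTorch sketch of a CCGQA-style layer. It is an interpretation of the description above rather than the authors' implementation: the head counts, kernel sizes, the grouped-convolution realization of $\text{conv}_{\text{seq}}$ and $\text{conv}_{\text{seq+ch}}$, and the tied query/key head dimension are assumptions, and RoPE as well as the paper's fused kernels are omitted.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class CCGQASketch(nn.Module):
    """Shape-level sketch of a CCGQA-style attention layer (illustrative only)."""

    def __init__(self, embed_dim=512, n_q_heads=8, group_size=4,
                 head_dim=32, conv_kernel=3):
        super().__init__()
        assert n_q_heads % group_size == 0
        self.nq, self.g, self.hd = n_q_heads, group_size, head_dim
        self.nkv = n_q_heads // group_size                    # shared K/V heads (GQA-style)
        dq, dkv = n_q_heads * head_dim, self.nkv * head_dim   # compressed latent widths
        # Step 1: linear down-projections into the latent space.
        self.w_q = nn.Linear(embed_dim, dq, bias=False)
        self.w_k = nn.Linear(embed_dim, dkv, bias=False)
        self.w_v = nn.Linear(embed_dim, dkv, bias=False)
        self.w_v_prev = nn.Linear(embed_dim, dkv, bias=False)  # value-shift branch
        # Step 2: two-step convolutional mixing, realized here as a causal depthwise
        # conv over the sequence followed by a causal channel-mixing conv (assumed).
        self.conv_q_seq = nn.Conv1d(dq, dq, conv_kernel, groups=dq)
        self.conv_q_ch = nn.Conv1d(dq, dq, conv_kernel)
        self.conv_k_seq = nn.Conv1d(dkv, dkv, conv_kernel, groups=dkv)
        self.conv_k_ch = nn.Conv1d(dkv, dkv, conv_kernel)
        self.pad = conv_kernel - 1
        # Step 7: up-projection back to model width (the value shift doubles head width).
        self.w_o = nn.Linear(n_q_heads * 2 * head_dim, embed_dim, bias=False)

    def _causal_conv(self, x, conv):                 # x: [B, S, D] -> [B, S, D]
        return conv(F.pad(x.transpose(1, 2), (self.pad, 0))).transpose(1, 2)

    def forward(self, x):                            # x: [B, S, E]
        B, S, _ = x.shape
        q = self._causal_conv(self._causal_conv(self.w_q(x), self.conv_q_seq), self.conv_q_ch)
        k = self._causal_conv(self._causal_conv(self.w_k(x), self.conv_k_seq), self.conv_k_ch)
        q = q.reshape(B, S, self.nq, self.hd)
        k = k.reshape(B, S, self.nkv, self.hd)
        # Step 4: q-k mean (post-conv tensors stand in for q_pre / k_pre here).
        k_b = k.repeat_interleave(self.g, dim=2)              # B_group: broadcast to query heads
        qk_mu = 0.5 * (q + k_b)
        q = q + qk_mu
        k = k + qk_mu.reshape(B, S, self.nkv, self.g, self.hd).mean(dim=3)  # E_group: average
        # Step 5: value shift, concatenating current- and previous-token projections per head.
        v_cur = self.w_v(x).reshape(B, S, self.nkv, self.hd)
        v_prev = F.pad(self.w_v_prev(x), (0, 0, 1, 0))[:, :-1].reshape(B, S, self.nkv, self.hd)
        v = torch.cat([v_cur, v_prev], dim=-1)                # [B, S, nkv, 2*hd]
        # Step 6: QK normalization; RoPE in the compressed space is omitted in this sketch.
        q = F.normalize(q, dim=-1) * math.sqrt(self.hd)
        k = F.normalize(k, dim=-1) * math.sqrt(self.hd)
        # Steps 3 and 7: grouped attention computed entirely in the latent space.
        k = k.repeat_interleave(self.g, dim=2)
        v = v.repeat_interleave(self.g, dim=2)
        o = F.scaled_dot_product_attention(
            q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2), is_causal=True)
        return self.w_o(o.transpose(1, 2).reshape(B, S, -1))


x = torch.randn(2, 16, 512)
print(CCGQASketch()(x).shape)      # torch.Size([2, 16, 512])
```

With these default (assumed) settings the query stream is compressed by $C_1 = 2$ and the key/value stream by $C_2 = 8$ relative to the 512-wide residual stream, illustrating how head grouping and latent-width choices jointly determine the compression rates.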

Compression Rate Flexibility: Separate factors $C_1$ and $C_2$ allow independent control over query and key/value compression, letting practitioners balance compute and memory demands, as the short example below illustrates.
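As a rough illustration of how the two factors interact, the following back-of-the-envelope calculation (with assumed values for $E$, $S$, $C_1$, and $C_2$, and without modeling details such as whether the shifted-value branch is cached) shows the resulting cache and compute scaling:

```python
# Illustrative scaling arithmetic for decoupled compression (assumed values only).
E, S = 2048, 16_384                     # embedding width and context length
C1, C2 = 2, 8                           # assumed query and key/value compression factors
d_q, d_kv = E // C1, E // C2            # latent widths used inside attention

# Per-layer KV-cache elements: standard MHA stores E-wide keys and values per token,
# while the compressed cache stores d_kv-wide ones.
print((2 * E * S) / (2 * d_kv * S))     # -> 8.0, set by the key/value factor C2

# QK^T and value application operate on d_q- and d_kv-wide tensors instead of E-wide
# ones, so those FLOPs shrink roughly in proportion to the chosen 1/C factors.
```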

3. Empirical Results and Performance Metrics

CCGQA exhibits multiple empirical advantages:

  • KV-cache Reduction: MoE models with CCGQA yield up to an $8\times$ reduction in KV-cache size versus standard MHA, with matched or superior downstream task quality.
  • FLOP Reduction: Compute costs, specifically for $QK^T$ and value application, scale approximately as $1/C$; in dense models, CCGQA achieves roughly $4\times$ lower FLOPs at similar quality benchmarks (Figliolia et al., 6 Oct 2025).
  • Hardware Acceleration: On H100 GPUs (BF16, $E = 2048$), the fused CCA/CCGQA kernel yields prefill latency savings of $1.7\times$ at sequence length 16k and a backward-pass speedup of $1.3\times$ compared to MHA.
  • Model Quality Preservation: On perplexity and evaluation benchmarks such as HellaSwag, ARC, and Winogrande, CCGQA matches or exceeds the quality of GQA and MLA at the same parameter and cache budget.

4. Comparison with Related Attention Mechanisms

| Method | KV-Cache Compression | Compute Savings | Latent-Space Use | RoPE Compatibility | Head Grouping |
|--------|----------------------|-----------------|------------------|--------------------|---------------|
| MHA    | None                 | None            | No               | Yes                | None          |
| GQA    | Yes                  | None            | No               | Yes                | Yes           |
| MLA    | Yes                  | Marginal        | Yes              | Complicated        | None          |
| CCA    | Yes                  | Yes             | Yes              | Yes                | None          |
| CCGQA  | Yes (multi)          | Yes (multi)     | Yes              | Yes                | Yes           |

CCGQA achieves a dual compression—both latent-space and head-wise—while supporting robust positional encoding and head grouping. MLA compresses KV-cache but requires extra up-projection FLOPs and more intricate RoPE handling. GQA simplifies cache by sharing heads but keeps FLOPs unchanged. CCGQA’s flexible decoupling of query and key/value compression rates offers a more tractable Pareto frontier for real-world deployment constraints.

5. Implementation Considerations and Practical Applications

  • Kernel Fusion: Efficient kernel implementation is essential. The combination of down-projection, convolution, residual combination, and value-shift requires aggressive kernel fusion, as the full benefit of CCGQA emerges only when these operations are performed in a single pass (see the brief sketch after this list).
  • Parameter Budget and Scaling: CCGQA allows practitioners to scale context windows and model depth without suffering prohibitive cost, as both compute and memory can be independently tuned. The design fits well with tensor parallelism and distributed memory schemes.
  • Integration with Mixture-of-Experts (MoE): When applied to MoE models, CCGQA’s latent-space compression amplifies the throughput gains from expert routing, particularly as KV-cache size is often the bottleneck for attention in fast decoding scenarios.
  • Long-Context Inference: CCGQA is particularly well suited for large batch, long-context serving, enabling efficient autoregressive generation in chatbots or document understanding without compromising response speed or accuracy.
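As a rough illustration of the fusion point above, the hypothetical CCGQASketch module from Section 2 can be handed to stock torch.compile, which fuses the many small projection, convolution, and residual operations into fewer kernels; this stands in for, and does not reproduce, the hand-fused kernels reported in the paper.

```python
import torch

# Assumes the CCGQASketch module from the Section 2 sketch is in scope.
layer = CCGQASketch(embed_dim=512).eval()
fused = torch.compile(layer)            # lets the compiler fuse the small latent-space ops

with torch.no_grad():
    x = torch.randn(1, 4096, 512)       # long-context, prefill-style input
    print(fused(x).shape)               # torch.Size([1, 4096, 512])
```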

6. Limitations and Future Directions

  • Additional Operation Complexity: The convolutional mixing, q–k mean, and value-shift introduce heightened architectural complexity compared to vanilla attention. Ensuring minimal overhead requires careful implementation.
  • Sequence-Dimension Compression: While CCGQA compresses along the hidden/cache dimension, it does not directly address the quadratic scaling with sequence length. Combining CCGQA with sequence-level sparsification or compression may further improve efficiency.
  • Expressivity in Low-Dimensional Latent Space: Research is needed to explore richer nonlinear mixing or additional latent operations to guard against representational collapse as compression rates increase.
  • Hybrid Approaches: Potential integration of SQA-like query head reduction (Filipek, 2 Oct 2025) or dynamic head grouping may further expand the quality–efficiency trade-off envelope.

7. Implications for Model Architecture and Hardware Deployment

CCGQA’s decoupled compression and latent-space operation are especially impactful in hardware-constrained environments and multi-GPU clusters. The ability to independently adjust compute and memory intensity supports finer-grained resource matching, allowing for unique scaling trade-offs in both research and production. Additionally, the method’s native compatibility with positional encoding (RoPE) and support for Mixture-of-Experts architectures yield practical benefits for emerging long-context LLMs (Figliolia et al., 6 Oct 2025).


CCGQA represents a convergence of convolutional latent-space compression and head grouping, constituting an efficient and tunable attention mechanism for modern transformers. Its principled design offers substantial reductions in compute and memory overheads and achieves empirically validated improvements in prefill and training latency, without measurable loss in generative or reasoning quality. Its architecture supports scalable deployment and ongoing incorporation of new efficiency-driven innovations.
