CompLLM: Efficient Context Compression

Updated 24 September 2025
  • CompLLM is a context compression method that segments long inputs into smaller blocks, converting tokens into Concept Embeddings for efficient processing.
  • It applies a LoRA-based neural compressor per segment to reduce computational complexity from quadratic to linear, enhancing speed and cutting memory usage.
  • The method enables caching for overlapping contexts while maintaining high accuracy in long-context Q&A, making it ideal for real-world retrieval-augmented tasks.

CompLLM is a context compression method designed to address the computational bottlenecks inherent in LLMs when processing long input sequences. Traditional self-attention in LLMs scales quadratically with context length, rendering direct inference on extremely long contexts computationally expensive or altogether infeasible. CompLLM overcomes this by segment-wise soft compression, yielding a linear scaling approach that preserves performance while substantially reducing both latency and memory requirements for long-context question answering.

1. Motivation and Architectural Overview

The computational challenge with long-context LLM inference derives from the self-attention operation, which has O(N^2) time and memory complexity for an input of N tokens. While prior soft compression schemes aim to distill input into a more compact latent representation, they typically operate on the context as a single block. This “holistic” approach not only inherits quadratic complexity in the compression step but also precludes reusing computation when queries share overlapping context.

CompLLM divides an input context of N tokens into contiguous segments of S tokens (with S typically set to 20). Each segment is independently compressed into S/C “Concept Embeddings” using a lightweight neural compressor module (a Low-Rank Adaptation (LoRA) extension and a linear projection). This per-segment design enables linear scaling in overall computational complexity, allows compressed segments to be cached for reuse across queries, and supports generalization of models trained solely on moderately short sequences to contexts of 100k tokens and beyond.
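
The sketch below illustrates the shape bookkeeping of this design: N token embeddings split into N/S segments, each compressed to S/C vectors. The dimensions and the mean-pooling "compressor" are placeholders for the sketch only; CompLLM's actual compressor is the LoRA-adapted module described in the next section.

```python
import torch

# Illustrative shape bookkeeping for segment-wise compression.
N, S, C, d_model = 1000, 20, 2, 512

token_embeddings = torch.randn(N, d_model)             # full context, N tokens
segments = token_embeddings.view(N // S, S, d_model)   # (N/S, S, d)

def compress_segment(seg: torch.Tensor) -> torch.Tensor:
    """Stand-in compressor: S token embeddings -> S/C Concept Embeddings in the
    same d_model-dimensional space (here: naive mean pooling, not the real module)."""
    return seg.view(S // C, C, d_model).mean(dim=1)     # (S/C, d)

concepts = torch.stack([compress_segment(s) for s in segments])  # (N/S, S/C, d)
compressed_context = concepts.reshape(-1, d_model)               # (N/C, d)
print(compressed_context.shape)  # torch.Size([500, 512])
```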

2. Technical Methodology

Given input tokens segmented into N/S non-overlapping blocks of length S, each block is compressed via an adapted neural module on top of the base LLM. The computational cost per block (compression) is O(S^2), leveraging internal self-attention. For all blocks, the total cost is:

O\left(\frac{N}{S} \cdot S^2\right) = O(N \cdot S)

which is linear in N for constant S.
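
As a back-of-the-envelope check of this gain (illustrative numbers only, ignoring constant factors and the subsequent generation pass), take N = 10^5 tokens and S = 20:

\frac{N^2}{N \cdot S} = \frac{10^{10}}{2 \times 10^6} = 5000

so the compression step performs on the order of a few thousand times fewer attention operations than processing the context as a single block.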

Compression per block involves projecting S Token Embeddings into S/C Concept Embeddings using the LoRA-adapted compressor and a final dense layer. Importantly, these Concept Embeddings live in the same latent space as the Token Embeddings, allowing direct interchangeability and eliminating the need for further model fine-tuning.
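
A minimal PyTorch sketch of such a compressor head follows. The generic TransformerEncoder (standing in for the LoRA-adapted LLM), the mean-pooling step used to go from S to S/C positions, and all dimensions are assumptions of this sketch rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SegmentCompressor(nn.Module):
    """Sketch of a per-segment compressor: encode one segment, pool its hidden
    states down to S/C positions, and map them back into the token-embedding
    space with a final linear ("dense") layer."""

    def __init__(self, d_model: int = 512, n_heads: int = 8, compression: int = 2):
        super().__init__()
        # Stand-in for the LoRA-adapted base LLM acting on a single segment.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.compression = compression
        self.proj = nn.Linear(d_model, d_model)  # final dense layer

    def forward(self, segment: torch.Tensor) -> torch.Tensor:
        # segment: (batch, S, d_model) token embeddings for one segment
        h = self.encoder(segment)                                   # (batch, S, d)
        b, s, d = h.shape
        pooled = h.reshape(b, s // self.compression, self.compression, d).mean(dim=2)
        return self.proj(pooled)                                    # (batch, S/C, d)

# Example: a 20-token segment compressed to 10 Concept Embeddings.
out = SegmentCompressor()(torch.randn(1, 20, 512))
print(out.shape)  # torch.Size([1, 10, 512])
```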

The compressor is trained by distilling hidden activations from standard (uncompressed) inference. For a set of answer token indices A and layer index l, hidden states H_A^{(l)} from the teacher (full context) and \hat{H}_A^{(l)} from the student (compressed context) are compared. The per-layer loss is normalized by the standard deviation of the teacher activations:

L_{\text{layer}}^{(l)}(c, x) = \frac{1}{\sigma^{(l)}(c,x)} \cdot \frac{1}{|A| \cdot d} \sum_{t \in A} \sum_{j=1}^{d} \mathrm{SmoothL1}_\beta\left(\hat{H}_{t,j}^{(l)}, H_{t,j}^{(l)}\right)

with

\sigma^{(l)}(c,x) = \mathrm{Std}\left(H_A^{(l)}\right)

and

\mathrm{SmoothL1}_\beta(u, v) = \begin{cases} \frac{1}{2}(u-v)^2/\beta, & \text{if } |u-v| < \beta \\ |u-v| - \frac{\beta}{2}, & \text{otherwise} \end{cases}

This objective enforces local correspondence between compressed and uncompressed representations, preserving essential information for downstream reasoning.
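
A minimal PyTorch sketch of this per-layer objective, assuming the hidden states at the answer positions have already been gathered into (|A|, d) tensors; the beta default is an assumption of the sketch:

```python
import torch
import torch.nn.functional as F

def layer_distillation_loss(h_student: torch.Tensor,
                            h_teacher: torch.Tensor,
                            beta: float = 1.0) -> torch.Tensor:
    """Smooth-L1 distance between student (compressed-context) and teacher
    (full-context) hidden states at the answer positions, averaged over the
    |A| positions and d features, then normalized by the standard deviation
    of the teacher activations. Both inputs have shape (|A|, d)."""
    sigma = h_teacher.std()
    loss = F.smooth_l1_loss(h_student, h_teacher, beta=beta, reduction="mean")
    return loss / sigma
```

In a full training loop this term would be accumulated across layers l; per the description above, only the lightweight compressor (LoRA adapter and projection) is trained, with no further fine-tuning of the base LLM.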

3. Key Properties: Efficiency, Scalability, and Reusability

Efficiency: Segment-wise compression restricts the self-attention window to each S-length segment, yielding O(N) complexity across N/S segments and dramatically accelerating inference, especially during context prefill and early generation (Time To First Token, TTFT).

Scalability: Because training only requires short segments, models can process sequences of 100k or more tokens at inference time, generalizing well in long-context regimes without retraining for large N. This generalization is enabled by the compressor operating locally and independently on each segment.

Reusability: As each segment is compressed independently, its Concept Embeddings can be cached. In retrieval-augmented scenarios or code assistants working across overlapping contexts, this eliminates the need to re-compute segment compressions for shared sub-contexts, yielding further efficiency gains in batch and interactive workloads.
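
The sketch below illustrates the caching pattern this enables, using a hypothetical compress_segment function and an in-memory dictionary keyed by segment content; the hashing scheme and any eviction policy are assumptions of the sketch, not details from the source.

```python
import hashlib
from typing import Callable, Dict, List, Sequence

_segment_cache: Dict[str, List] = {}

def compress_with_cache(token_ids: Sequence[int],
                        compress_segment: Callable[[Sequence[int]], List],
                        segment_len: int = 20) -> List:
    """Compress a context segment by segment, reusing cached Concept Embeddings
    for any segment seen before (e.g., documents shared across retrieval-augmented
    queries)."""
    compressed: List = []
    for start in range(0, len(token_ids), segment_len):
        segment = tuple(token_ids[start:start + segment_len])
        key = hashlib.sha256(repr(segment).encode()).hexdigest()
        if key not in _segment_cache:            # compress only on a cache miss
            _segment_cache[key] = compress_segment(segment)
        compressed.extend(_segment_cache[key])
    return compressed
```

One practical consideration (an assumption here, not stated in the source) is that segment boundaries must be deterministic, e.g., fixed-size chunks measured from the start of each document, so that shared documents map to identical cached segments across queries.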

4. Experimental Results and Comparative Performance

With a typical compression rate C=2, CompLLM achieves the following for long contexts:

  • Speedup in TTFT: Up to 4× compared to uncompressed inference, owing to the reduced O(N^2) → O(N^2/C^2) scaling of the key-value (KV) cache prefill.
  • KV Cache Reduction: 50% decrease, since half as many embeddings are stored per context window.
  • Accuracy: Matching or exceeding the performance of the original LLM on long-context Q&A, with improvement observed for very long contexts. The compressed model maintains fidelity in answer generation due to careful alignment of hidden representations during training.

When compared with established methods such as LLMLingua-2, CompLLM yields better or comparable accuracy at moderate lengths (under 50k tokens), and its efficiency and caching features provide practical advantages at scale.

5. Practical Applications

The design of CompLLM enables efficient deployment in a variety of real-world LLM workloads:

  • Retrieval-Augmented Generation: Pre-cached compressed representations of documents enable rapid multi-document retrieval and aggregation without repeated computation.
  • Massive Contextual Search/QA: Legal document analysis, codebase exploration, and any scenario relying on extremely long source materials benefit from both the reduced computational cost and maintained context fidelity.
  • Production LLM Systems: By integrating segment-wise compression and caching, existing LLM architectures can serve longer context windows on limited hardware, broadening their deployment viability.

6. Limitations and Future Directions

Current compression rates (e.g., C=2) reflect a trade-off between speed, memory, and information retention. Extreme compression or very small segment lengths could degrade the ability to preserve inter-segment dependencies, particularly when strong cross-segment context is required. While the current approach uses a LoRA-based compressor with a simple linear output, future work may examine non-linear or cross-segment-aware compressors to further improve scalability without loss. Cache management policies and integration into complex pipeline systems remain open engineering challenges for maximizing CompLLM’s practical utility.


CompLLM introduces an efficient, scalable, and reusable framework for context compression in long-context LLM Q&A (Berton et al., 23 Sep 2025). By employing segment-wise independent compression with straightforward training objectives, it enables substantial improvements in both computational efficiency and maximum context length, suggesting a clear direction for practical, production-grade deployment of LLMs on tasks that demand high-fidelity processing of long unstructured documents.
