
Hierarchical Token Compression

Updated 5 December 2025
  • Hierarchical token compression is a method that adaptively reduces token sequences by leveraging multi-level abstractions and semantic hierarchies.
  • It employs stagewise pruning, merging, and summary token insertion to balance computational efficiency with minimal accuracy loss in diverse models.
  • Empirical results highlight significant reductions in FLOPs, token counts, and latency, while maintaining or enhancing performance on downstream tasks.

Hierarchical token compression refers to a class of techniques for adaptive, multi-stage reduction of sequence length or embedding dimensionality in tokenized representations—across vision, language, and multimodal transformers—by leveraging structural or semantic hierarchies in the data. Unlike flat, fixed granularity compression, hierarchical approaches operate at multiple abstraction layers and adaptively preserve salient information, optimizing both computational efficiency and downstream task accuracy across a spectrum of model architectures.

1. Core Principles and Definition

Hierarchical token compression exploits the observation that token sequences—whether visual, textual, or multimodal—are highly redundant and internally structured. By applying stagewise, content-aware compression and/or merging, these methods dynamically collapse low-importance tokens and construct multi-level abstractions while explicitly preserving semantic or spatial locality and key information channels.

Key elements in hierarchical schemes include (see the sketch after this list):

  • Multi-stage or layer-wise compression, where each stage (or model layer) applies its own pruning, merging, or summarization operations, as opposed to a single global reduction.
  • Residual or compensatory paths, which ensure that pruned tokens can be reconstructed or bypassed as needed, mitigating over-squashing of salient information.
  • Dynamic scoring, typically informed by gradient, attention, or semantics-derived importance measures, to adaptively modulate compression granularity or retention.
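
A minimal sketch of how these elements compose is shown below; the norm-based saliency proxy, the function names, and the per-stage keep-ratio schedule are illustrative assumptions, not any specific paper's method.

```python
# Minimal sketch of stagewise, score-based token retention. The norm-based
# saliency proxy and the keep-ratio schedule are illustrative assumptions.
import numpy as np

def importance_scores(tokens: np.ndarray) -> np.ndarray:
    # Placeholder saliency: L2 norm of each token embedding. Real systems use
    # gradient-, attention-, or semantics-derived importance measures.
    return np.linalg.norm(tokens, axis=-1)

def compress_stage(tokens: np.ndarray, keep_ratio: float) -> np.ndarray:
    # Keep the top-k most salient tokens at this stage, preserving order.
    k = max(1, int(round(keep_ratio * tokens.shape[0])))
    keep = np.argsort(importance_scores(tokens))[::-1][:k]
    return tokens[np.sort(keep)]

def hierarchical_compress(tokens: np.ndarray, schedule=(0.9, 0.7, 0.5)) -> np.ndarray:
    # Multi-stage compression: each stage applies its own retention ratio,
    # so pruning compounds gradually instead of one global cut.
    for keep_ratio in schedule:
        tokens = compress_stage(tokens, keep_ratio)
    return tokens

seq = np.random.randn(196, 64)          # e.g. 14x14 patch tokens of width 64
out = hierarchical_compress(seq)
print(seq.shape, "->", out.shape)       # (196, 64) -> (62, 64)
```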

Hierarchical token compression has been applied to vision transformers (Mao et al., 30 Mar 2025), video-LMMs (Wang et al., 30 Nov 2025, Wang et al., 3 Jun 2025), point cloud models (Zhang et al., 13 Nov 2025), LLM tokenization and embedding (Geng et al., 1 Jun 2025, Dolga et al., 17 Oct 2025, V et al., 22 Sep 2025, Neitemeier et al., 17 Jan 2025), and chain-of-thought reasoning (Wang et al., 22 May 2025), among other contexts.

2. Methodological Taxonomy

The following major design paradigms have emerged:

A. Layer-Wise Prune-and-Merge for Vision Transformers

The Prune & Merge (PM-ViT) framework inserts a module at each ViT block that first prunes tokens using a gradient-weighted attention importance score

$S_i = \left| \frac{1}{H} \sum_{h} \sum_{j} \frac{\partial L}{\partial A_{ij}^{o}} \cdot A_{ij}^{o} \right|$

and then merges contiguous tokens in low-importance regions via a trainable sparse merge matrix M. All pruned tokens are preserved via a complementary mask and can be reconstructed at the block output using the Moore–Penrose pseudoinverse R of M, plus shortcut connections, enabling lossless, layer-adaptive compression with negligible inference overhead (Mao et al., 30 Mar 2025). Compression is hierarchical by design, with early layers typically pruned more lightly than late ones.
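
As a rough illustration of the prune-and-merge mechanics (not the exact PM-ViT implementation: the pairing rule, shapes, and equal averaging weights below are assumptions), contiguous low-importance tokens can be merged through a matrix M and approximately restored with its pseudoinverse:

```python
# Rough sketch of prune-and-merge: average selected contiguous token pairs via a
# merge matrix M, then approximately restore the full sequence with R = pinv(M).
# The pairing rule and equal 0.5 weights are assumptions, not PM-ViT's learned M.
import numpy as np

def build_merge_matrix(n_tokens: int, merge_pairs) -> np.ndarray:
    # M maps n_tokens -> fewer tokens: each merged row averages one pair,
    # every other token passes through unchanged.
    merged = {i for pair in merge_pairs for i in pair}
    rows = []
    for pair in merge_pairs:
        row = np.zeros(n_tokens)
        row[list(pair)] = 0.5
        rows.append(row)
    for i in range(n_tokens):
        if i not in merged:
            row = np.zeros(n_tokens)
            row[i] = 1.0
            rows.append(row)
    return np.stack(rows)

tokens = np.random.randn(8, 16)                # 8 tokens, embedding dim 16
M = build_merge_matrix(8, merge_pairs=[(2, 3), (5, 6)])
compressed = M @ tokens                        # 6 tokens flow through the block
R = np.linalg.pinv(M)                          # Moore-Penrose pseudoinverse of M
restored = R @ compressed                      # back to 8 tokens (approximate)
print(compressed.shape, restored.shape)        # (6, 16) (8, 16)
```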

B. Streaming and Multi-Stage Event-Based Pruning for Video LLMs

Hierarchical token compression for long video understanding employs multi-phase token elimination:

  • Stage 1: Structural event segmentation, where frame or patch tokens are grouped according to event similarity scores (often via cross-modal attention or cosine similarity).
  • Stage 2: Hierarchical token pruning, which applies progressive, layer-by-layer reduction using semantic alignment and attention-based saliency, often retaining more tokens in “key” events versus “non-key” regions, following a schedule parameterized by distinct retention ratios at various model depths.
  • Stage 3: Decoding/KV-cache optimization, removing visual tokens from cache memory entirely in later decoding stages to further reduce footprint (Wang et al., 3 Jun 2025, Wang et al., 30 Nov 2025).

This flow yields multiplicative reductions in compute and memory, while maintaining or even improving accuracy on long video QA and captioning benchmarks.
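
A toy sketch of this flow follows; the similarity threshold, retention ratios, and query-alignment scoring are illustrative assumptions rather than METok's or STC's exact procedure.

```python
# Toy sketch of event segmentation followed by per-event, query-aware retention.
# Thresholds, keep ratios, and the single "key event" pick are all assumptions.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def segment_events(frame_feats, sim_threshold=0.85):
    # Stage 1: start a new event when similarity to the previous frame drops.
    events, current = [], [0]
    for i in range(1, len(frame_feats)):
        if cosine(frame_feats[i - 1], frame_feats[i]) < sim_threshold:
            events.append(current)
            current = []
        current.append(i)
    events.append(current)
    return events

def prune_event(frame_feats, event, query_feat, keep_ratio):
    # Stage 2: within an event, keep the frames most aligned with the query.
    scores = np.array([cosine(frame_feats[i], query_feat) for i in event])
    k = max(1, int(round(keep_ratio * len(event))))
    keep = np.array(event)[np.argsort(scores)[::-1][:k]]
    return sorted(keep.tolist())

base = np.repeat(np.random.randn(4, 128), 8, axis=0)   # 4 "events" of 8 frames
frames = base + 0.01 * np.random.randn(32, 128)        # near-duplicate frames
query = np.random.randn(128)                           # text/query embedding
events = segment_events(frames)
key_idx = max(range(len(events)), key=lambda i: len(events[i]))  # toy "key" event
kept = []
for idx, event in enumerate(events):
    ratio = 0.5 if idx == key_idx else 0.2             # keep more in key events
    kept += prune_event(frames, event, query, ratio)
print(f"{len(frames)} frames -> {len(kept)} retained")  # e.g. 32 frames -> 10 retained
```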

C. Hierarchical Tokenization and Embedding for LLMs

Several approaches encode hierarchy in the vocabulary and embedding stage:

  • zip2zip (Geng et al., 1 Jun 2025) dynamically builds a hypertoken hierarchy via streaming Lempel-Ziv-Welch (LZW) compression on base tokens at inference time, constructing variable-length composite tokens and deriving their embeddings on the fly via a learned “hyper-encoder” (see the sketch after this list).
  • Hierarchical BPE (Dolga et al., 17 Oct 2025) applies an explicit end-of-patch marker to BPE tokens and then runs a secondary BPE merge up to a user-chosen maximum patch size S, allowing language-agnostic, adjustable patch granularity with significant vocabulary and compute savings.
  • Hierarchical autoregressive transformers (Neitemeier et al., 17 Jan 2025) use a small character-level encoder to compress each word into a dense embedding, which is then modeled in sequence by a word-level backbone before being decoded back to bytes; this design adapts seamlessly to new domains or languages and yields robust, tokenizer-free models.
  • Aggregate Semantic Grouping (ASG) (V et al., 22 Sep 2025) applies product quantization to word embeddings, decomposing each vector into K subspace blocks and assigning each block to an entry in a small shared codebook, achieving extreme model size reduction with little downstream degradation.
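
The following toy sketch illustrates minting composite “hypertokens” with a streaming LZW pass over base token ids, in the spirit of zip2zip; the length cap, greedy emission rule, and naming are assumptions, and a real system would additionally derive each hypertoken's embedding with a learned hyper-encoder.

```python
# Toy sketch of minting composite "hypertokens" with a streaming LZW pass over
# base token ids. The length cap, greedy emission rule, and naming are
# assumptions; hypertoken embeddings would come from a learned hyper-encoder.

def lzw_hypertokens(token_ids, max_len=4):
    codebook = {}                        # tuple of base ids -> hypertoken id
    next_id = max(token_ids) + 1         # hypertoken ids extend the base vocab

    def emit(phrase):
        # Emit the hypertoken id for a multi-token phrase, or the base id itself.
        return codebook[phrase] if len(phrase) > 1 else phrase[0]

    output, current = [], (token_ids[0],)
    for tok in token_ids[1:]:
        candidate = current + (tok,)
        if candidate in codebook:
            current = candidate          # keep extending a known phrase
        else:
            output.append(emit(current))
            if len(candidate) <= max_len:
                codebook[candidate] = next_id   # mint a new hypertoken
                next_id += 1
            current = (tok,)
    output.append(emit(current))
    return output, codebook

seq = [3, 7, 3, 7, 3, 7, 3, 7, 2]
compressed, codebook = lzw_hypertokens(seq)
print(len(seq), "->", len(compressed), "tokens;", len(codebook), "hypertokens minted")
# 9 -> 6 tokens; 5 hypertokens minted
```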

D. Hierarchical Summarization and Attention Rewiring

Hierarchical Token Prepending (HTP) improves information flow in LLM embeddings by partitioning long sequences into blocks, prepending local and global summary tokens, and employing block-level mean-pooling to counteract over-squashing. This gives multiple, explicit “highways” for backward information propagation and linearizes the impact of input tokens in representation (Ding et al., 18 Nov 2025).
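
A minimal sketch of the block-summary idea appears below; the block size, the pooling choice, and the plain concatenation are assumptions, since HTP's actual attention rewiring is more involved.

```python
# Minimal sketch of blockwise summary prepending: mean-pool each block and the
# whole sequence, then prepend those summaries as extra tokens. Block size and
# the plain concatenation are assumptions; HTP's attention rewiring is richer.
import numpy as np

def prepend_summaries(hidden: np.ndarray, block_size: int = 4) -> np.ndarray:
    n, _ = hidden.shape
    blocks = [hidden[i:i + block_size] for i in range(0, n, block_size)]
    local = np.stack([b.mean(axis=0) for b in blocks])    # one summary per block
    global_summary = hidden.mean(axis=0, keepdims=True)   # one global summary
    # [global, block summaries, original tokens]: later tokens now have short
    # "highways" back to earlier context instead of only long attention paths.
    return np.concatenate([global_summary, local, hidden], axis=0)

states = np.random.randn(16, 32)                           # 16 tokens, width 32
augmented = prepend_summaries(states)
print(states.shape, "->", augmented.shape)                 # (16, 32) -> (21, 32)
```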

E. Chunked Compression for Long Chain-of-Thought

In chain-of-thought reasoning, two-stage chunk-level compression involves splitting sequences by heuristics (e.g., double newlines and minimum length), compressing each chunk into several candidate summaries using an LLM, and then beam-searching through candidates to maximize fluency and overall compression subject to a task accuracy constraint (Wang et al., 22 May 2025).
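
The control flow can be sketched as follows; the summarizer is stubbed out and the shortest-candidate selection is only a stand-in for the paper's beam search under a task-accuracy constraint.

```python
# Skeleton of the two-stage chunk-level flow: split on blank lines with a
# minimum-length rule, summarize each chunk, pick one candidate per chunk.
# The summarizer is a stub; shortest-candidate selection replaces beam search.

def split_chunks(cot_text: str, min_len: int = 40):
    # Split on double newlines, then merge fragments shorter than min_len forward.
    raw = [c.strip() for c in cot_text.split("\n\n") if c.strip()]
    chunks, buf = [], ""
    for piece in raw:
        buf = f"{buf}\n\n{piece}".strip() if buf else piece
        if len(buf) >= min_len:
            chunks.append(buf)
            buf = ""
    if buf:
        chunks.append(buf)
    return chunks

def compress_cot(cot_text: str, summarize_fn, n_candidates: int = 3):
    compressed = []
    for chunk in split_chunks(cot_text):
        candidates = [summarize_fn(chunk) for _ in range(n_candidates)]
        compressed.append(min(candidates, key=len))   # stand-in for beam search
    return "\n\n".join(compressed)

# Toy summarizer: keep only the first sentence of a chunk.
toy_summarizer = lambda chunk: chunk.split(". ")[0].rstrip(".") + "."

trace = ("First we expand the product. It gives 6x + 9.\n\n"
         "Then we solve 6x + 9 = 21. Subtracting 9 gives 6x = 12, so x = 2.")
print(compress_cot(trace, toy_summarizer))
```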

3. Mathematical and Algorithmic Formulations

Examples of key formulations and workflows:

| Domain | Compression Criterion | Hierarchical Operation |
|---|---|---|
| Vision | $S_i = \lvert \frac{1}{H}\sum_{h}\sum_{j} \partial L/\partial A_{ij} \cdot A_{ij} \rvert$ | Token pruning and contiguous-region merging per layer (Mao et al., 30 Mar 2025) |
| Video | $S_{v_i,t} = \frac{v_i \cdot t}{\lVert v_i\rVert\,\lVert t\rVert}$; multi-stage retention ratios $R_{\mathrm{key}}(l)$, $R_{\mathrm{nonkey}}(l)$ per layer | Event segmentation → per-layer selective retention → cache pruning (Wang et al., 3 Jun 2025, Wang et al., 30 Nov 2025) |
| LLM hypertokens | LZW streaming on BPE: create/merge tokens up to length $M$ | Dynamic hypertoken codebook, run-time “hyper-encoding” for unseen tokens (Geng et al., 1 Jun 2025) |
| PQ embedding | $e_t = [e_t^{(1)}; \ldots; e_t^{(K)}]$; $\hat{e}_t = \sum_{k=1}^{K} C_k[q_k(t)]$ | Product quantization: blockwise codebook lookup, optional hierarchical residuals (V et al., 22 Sep 2025) |
| HTP embeddings | Blockwise summary tokens at each layer; mean-pooling Jacobian vs. last-token pooling | Layerwise hidden-state rewiring, block/global prepending (Ding et al., 18 Nov 2025) |

4. Compression-Accuracy Trade-offs and Empirical Findings

Hierarchical token compression methods demonstrate superior efficiency-accuracy Pareto frontiers relative to flat, non-hierarchical baselines. Empirical findings across domains include:

  • On vision tasks (ImageNet-1k, ADE20K, DeiT/Segmenter models), PM-ViT achieves up to 1.64× speedup and 41.3% FLOPs reduction with <0.2% accuracy drop (Mao et al., 30 Mar 2025).
  • In video modeling, STC combined with token pruning (ReKV framework) yields 24.5% ViT and 45.3% LLM prefill latency reductions at 99% test accuracy (Wang et al., 30 Nov 2025), while METok delivers 72.3% FLOPs and KV-cache memory reduction with no accuracy loss (Wang et al., 3 Jun 2025).
  • zip2zip compresses LLM input/output sequence length by 20–60%, with up to 46.6% throughput improvement and only a modest perplexity cost (Geng et al., 1 Jun 2025).
  • PQ-based ASG retains 95–100% downstream accuracy at 0.4–0.5% of the original embedding parameters, outperforming prior semantic grouping (V et al., 22 Sep 2025).
  • Hierarchical BPE delivers bits-per-byte and accuracy gains over standard BPE and space- or entropy-based dynamic patching, with improved scaling and no language dependency (Dolga et al., 17 Oct 2025).
  • HCC-3D reduces 3D point cloud token counts by roughly 98%, preserving SOTA performance in classification and captioning (Zhang et al., 13 Nov 2025).

Characteristic trade-offs include: 1) a “knee” in the efficiency–accuracy curve at moderate compression ratios, 2) small but measurable degradation under extreme compression (especially if token merging or pruning is aggressive in semantically dense regions), and 3) the need for architecture- and domain-specific tuning of hierarchy depth, retention ratios, or chunk/patch size.

5. Applications Across Modalities and Architectures

Hierarchical token compression is now foundational for efficient scaling in:

  • Vision Transformers: Layer-wise prune-and-merge has enabled real-time or resource-constrained deployment without compromising spatial detail required for dense prediction tasks (Mao et al., 30 Mar 2025).
  • Video LLMs: Stagewise event segmentation and hierarchical pruning are essential for scaling to long or streaming video understanding with large backbone LLMs (Wang et al., 3 Jun 2025, Wang et al., 30 Nov 2025).
  • 3D Vision-LLMs: Compensatory hierarchical compression with dual-stage GSC+ADM modules enables nearly 98% token reduction for point cloud input, drastically lowering compute and memory while preserving accuracy (Zhang et al., 13 Nov 2025).
  • LLM Tokenization and Adaptation: On-the-fly hypertokenization (LZW, hierarchical BPE) and compositional embedding schemes drive improvements in inference cost, compactness, and multilingual/robust performance (Geng et al., 1 Jun 2025, Dolga et al., 17 Oct 2025, V et al., 22 Sep 2025, Neitemeier et al., 17 Jan 2025).
  • Reasoning and Retrieval: Hierarchical prepending and chunk-based compression advance the state of the art in long-context embedding, chain-of-thought distillation, and information retrieval (Ding et al., 18 Nov 2025, Wang et al., 22 May 2025).

6. Limitations, Open Challenges, and Future Directions

While hierarchical token compression demonstrates powerful efficiency gains and robustness, several limitations remain:

  • Algorithmic complexity: Many approaches require nontrivial, dataset- or task-specific hyperparameter tuning (e.g., number of global/detail queries, patch budgets, retention ratios).
  • Overcompression risk: Excessive hierarchical compression can lead to information bottlenecks, diminishing quality in subtle or information-dense domains (noted, e.g., for arithmetic or specialized token spans in zip2zip (Geng et al., 1 Jun 2025)).
  • Generalization: Practices such as dynamic chunking or block-scheduling require adaptation for different tasks (e.g., transition from short-form to long-form, or from single-object to large-scale multi-object in 3D scenes (Zhang et al., 13 Nov 2025)).
  • Codebook reuse and cache management: Dynamic codebook construction (zip2zip) may create large numbers of "dead" hypertokens, with low reuse rates and suboptimal caching, suggesting the need for adaptive pruning or codebook reinitialization (Geng et al., 1 Jun 2025).
  • Finite context limitations: It remains an open problem to scale these methods without loss of coherence or compositionality in very long or streaming scenarios.

Ongoing research focuses on integration with extreme quantization, hybrid memory hierarchies, learned or adaptive patching/grouping, and principled approaches to dynamic codebook management, with the aim of further approaching the theoretical limits of redundancy reduction in structured sequence modeling.

7. Comparative Summary of Representative Methods

| Method/Paper | Modality | Hierarchical Mechanism | Compression Ratio | Performance Impact |
|---|---|---|---|---|
| PM-ViT (Mao et al., 30 Mar 2025) | Vision | Layer-wise prune + merge w/ mask | 41–50% tokens pruned | <0.2% top-1 drop |
| METok (Wang et al., 3 Jun 2025) | Video | 3-stage: event, hierarchical prune | 72% FLOPs, 92.5% KV saved | None (maintains SOTA) |
| HCC-3D (Zhang et al., 13 Nov 2025) | 3D (VLM) | Global + detail hierarchical compression | >98% tokens pruned | SOTA retained |
| zip2zip (Geng et al., 1 Jun 2025) | LM (text) | On-the-fly LZW, hypertoken encoder | 20–60% token count reduced | ~7% perplexity increase |
| Hierarchical BPE (Dolga et al., 17 Oct 2025) | LM (text) | 2-level BPE + EOP, patch encoder | Fertility ~1.5–3.6 | Best BPB, retains QA |
| ASG (V et al., 22 Sep 2025) | LM embedding | PQ codebook, blockwise assignment | 0.4–0.5% of embedding size | >95% task F1/AP |
| HTP (Ding et al., 18 Nov 2025) | LLM retrieval | Blockwise summary tokens, mean pool | O(2M/n) overhead | +1–3 NDCG points |

Across all settings, hierarchical token compression has empirically shifted the viability frontier for large sequence models, enabling orders-of-magnitude savings in compute and memory without sacrificing the fine-structure of semantic, spatial, or temporal information flow essential for state-of-the-art performance.
