Jasper Token Compression 600M: Efficient Transformer

Updated 19 November 2025
  • Jasper-Token-Compression-600M is a bilingual transformer embedding model that integrates a fully differentiable token compression block for efficient sequence reduction.
  • It employs a two-layer SwiGLU-activated MLP and adaptive average pooling to achieve significant inference speedups, nearly matching an 8B teacher model’s performance.
  • The model combines knowledge distillation and contrastive learning in a four-stage training pipeline, balancing compression ratios with embedding fidelity.

Jasper-Token-Compression-600M is an open-source bilingual (English and Chinese) transformer embedding model that introduces a fully differentiable, one-dimensional convolution-based token compression block for efficient sequence reduction. Developed as an extension of the Stella and Jasper distillation-based paradigms, this architecture leverages both knowledge distillation and contrastive learning to achieve high-quality embeddings with significant inference acceleration compared to conventional dense transformer models of similar parameter count (Zhang et al., 18 Nov 2025).

1. Model Overview and Motivation

Jasper-Token-Compression-600M is designed to address the high memory and compute overhead of deep transformer models processing long sequences. The core innovation is a token-compression module that reduces the sequence length prior to self-attention, yielding faster, more memory-efficient processing without substantial loss of embedding fidelity. The approach is motivated by the need for practical runtime efficiency, enabling a base 600M-parameter model to approach the performance of a full 8B-parameter teacher while offering substantial speed gains (Zhang et al., 18 Nov 2025). This is achieved by building on sequence-level convolutional compression concepts previously validated for deletion-based sentence compression (Hou et al., 2020).

2. Token Compression Block: Architecture and Operation

The Jasper-Token-Compression-600M architecture inserts a token-compression module between the word-piece embedding layer and transformer encoder blocks:

  1. Input: The model receives a sequence $X \in \mathbb{R}^{L \times d}$, where $L$ is the (possibly long) input length and $d = 1024$ is the embedding width.
  2. Qwen3MLP Layer: A two-layer SwiGLU-activated feedforward network is applied:
    • First linear mapping: $d \rightarrow 4d$, with SwiGLU activation and dropout 0.1
    • Second linear mapping: $4d \rightarrow d$
    • The result is $H \in \mathbb{R}^{L \times d}$
  3. AdaptiveAvgPool1d: A parameter-free 1D average-pooling layer reduces $H$ along the length dimension to the target length $L'$, yielding $Y \in \mathbb{R}^{L' \times d}$. The pooling kernel size and stride are dynamically chosen such that

$$L' = \left\lfloor \frac{L + 2p - k}{s} \right\rfloor + 1$$

where $p = 0$ (no padding).

  4. Transformer Stack: The compressed sequence $Y$ is then processed by the standard Qwen3 transformer blocks (attention and FFN modules, now at length $L'$).

The whole module is end-to-end differentiable; only the MLP contains trainable parameters, and no additional masking or complicated memory management is required (Zhang et al., 18 Nov 2025).
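For illustration, the following PyTorch sketch mirrors the block described above, assuming a Qwen3-style gated SwiGLU MLP and `torch.nn.functional.adaptive_avg_pool1d` for the pooling step; class, argument, and dimension names are illustrative rather than taken from the released implementation.

```python
# Hypothetical sketch of the token-compression block (not the released code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TokenCompressionBlock(nn.Module):
    """SwiGLU MLP followed by parameter-free adaptive average pooling."""

    def __init__(self, d_model: int = 1024, expansion: int = 4, dropout: float = 0.1):
        super().__init__()
        # Gated SwiGLU feed-forward: two "up" projections (d -> 4d) and one "down" (4d -> d).
        self.gate_proj = nn.Linear(d_model, expansion * d_model, bias=False)
        self.up_proj = nn.Linear(d_model, expansion * d_model, bias=False)
        self.down_proj = nn.Linear(expansion * d_model, d_model, bias=False)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor, target_len: int) -> torch.Tensor:
        # x: (batch, L, d) word-piece embeddings before the transformer stack.
        h = self.down_proj(self.dropout(F.silu(self.gate_proj(x)) * self.up_proj(x)))
        # AdaptiveAvgPool1d operates on (batch, channels, length), hence the transposes.
        y = F.adaptive_avg_pool1d(h.transpose(1, 2), target_len).transpose(1, 2)
        return y  # (batch, L', d), ready for the Qwen3 transformer blocks


# Example: compress a 512-token batch to L' = 224 (roughly rho = 1/3 with L_th = 80).
block = TokenCompressionBlock()
y = block(torch.randn(2, 512, 1024), target_len=224)
print(y.shape)  # torch.Size([2, 224, 1024])
```

Because the pooling is parameter-free, the only weights receiving gradients through the compression path are the MLP projections.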

3. Dynamic Compression Scheduling

The input length after compression, $L'$, is determined by two hyperparameters:

  • Compression ratio $\rho \in (0, 1]$: controls the aggressiveness of pooling.
  • Threshold $L_{\mathrm{th}} = 80$: for short sequences ($L \leq L_{\mathrm{th}}$), no compression is applied ($L' = L$); otherwise,

$$L' = L_{\mathrm{th}} + \lfloor (L - L_{\mathrm{th}}) \cdot \rho \rfloor$$

During training, $\rho$ is dynamically sampled per batch according to the following schedule:

  • With probability 0.1, $\rho \sim \mathrm{Uniform}(0.1, 0.33)$
  • With probability 0.4, $\rho = 0.3333$
  • With probability 0.3, $\rho \sim \mathrm{Uniform}(0.33, 0.66)$
  • With probability 0.2, $\rho \sim \mathrm{Uniform}(0.66, 1.0)$

This exposes the network to a spectrum of compression rates, fostering robustness to variable-length sequences and variable compression (Zhang et al., 18 Nov 2025).
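As a concrete illustration of the scheduling rules above, the helpers below compute the compressed length and sample $\rho$ per batch; the function names and the use of Python's `random` module are assumptions for this sketch, not the paper's training code.

```python
# Illustrative helpers for the compression schedule (names are assumptions).
import math
import random

L_TH = 80  # threshold below which no compression is applied


def compressed_length(L: int, rho: float) -> int:
    """L' = L for short inputs, otherwise L_th + floor((L - L_th) * rho)."""
    if L <= L_TH:
        return L
    return L_TH + math.floor((L - L_TH) * rho)


def sample_rho() -> float:
    """Per-batch compression ratio, following the stated mixture schedule."""
    u = random.random()
    if u < 0.1:            # probability 0.1
        return random.uniform(0.1, 0.33)
    if u < 0.5:            # probability 0.4
        return 0.3333
    if u < 0.8:            # probability 0.3
        return random.uniform(0.33, 0.66)
    return random.uniform(0.66, 1.0)  # probability 0.2


# Example: a 1000-token input at rho = 0.3333 compresses to 80 + floor(920 * 0.3333) tokens.
print(compressed_length(1000, 0.3333))  # 386
```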

4. Integration with Distillation and Contrastive Training

The token-compression block is embedded within a four-stage training pipeline:

  1. Stage 1: Plain knowledge distillation (KD), with cosine loss only:

$$L_\text{cos} = 1 - E_s \cdot E_t$$

  2. Stage 2: Fixed-ratio compression with KD; MLP weights are updated while pooling remains static.
  3. Stage 3: Dynamic compression with structure-preserving distillation, adding a pairwise similarity loss:

$$L_\text{sim} = \mathrm{MSE}(B E_s\, B E_s^\top,\; B E_t\, B E_t^\top)$$

Total loss: $L_{s3} = 10\, L_\text{cos} + 100\, L_\text{sim}$

  4. Stage 4: Contrastive retrieval fine-tuning with InfoNCE and soft KL-distillation:

$$L_\text{cl} = -\frac{1}{N} \sum_i \log \frac{\exp(s(q_i, d_i^+)/\tau)}{Z_i}$$

$$L_\text{soft} = D_\text{KL}\big(\text{softmax}(S^{(s)}/\alpha) \,\|\, \text{softmax}(S^{(t)}/\alpha)\big)$$

$$L_{s4} = L_\text{cl} + 16\, L_\text{soft}$$

All losses are backpropagated through the token-compression block with standard gradient flow. The average pooling is parameter-free; all learnable parameters reside in the MLP.
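To make these objectives concrete, here is a minimal sketch of the stage-3 and stage-4 losses. It assumes L2-normalized in-batch student/teacher embedding matrices, interprets the similarity loss as an MSE between in-batch pairwise similarity matrices, restricts InfoNCE to in-batch negatives, and uses illustrative temperature values `tau` and `alpha`; none of these specifics are drawn from the released training code.

```python
# Hedged sketch of the stage-3 and stage-4 losses (illustrative, not the released code).
import torch
import torch.nn.functional as F


def stage3_loss(e_s: torch.Tensor, e_t: torch.Tensor) -> torch.Tensor:
    """e_s, e_t: L2-normalized (N, d) student and teacher embeddings."""
    l_cos = (1.0 - (e_s * e_t).sum(dim=-1)).mean()   # cosine distillation loss
    l_sim = F.mse_loss(e_s @ e_s.T, e_t @ e_t.T)     # pairwise-similarity matching
    return 10.0 * l_cos + 100.0 * l_sim


def stage4_loss(q_s, d_s, q_t, d_t, tau: float = 0.05, alpha: float = 1.0) -> torch.Tensor:
    """q_*, d_*: L2-normalized (N, d) query/document embeddings (student s, teacher t)."""
    # InfoNCE over in-batch similarities; the diagonal entries are the positives d_i^+.
    logits = q_s @ d_s.T / tau
    targets = torch.arange(q_s.size(0), device=q_s.device)
    l_cl = F.cross_entropy(logits, targets)
    # Soft KL distillation between student and teacher similarity distributions.
    l_soft = F.kl_div(
        F.log_softmax(q_s @ d_s.T / alpha, dim=-1),
        F.softmax(q_t @ d_t.T / alpha, dim=-1),
        reduction="batchmean",
    )
    return l_cl + 16.0 * l_soft
```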

5. Performance Evaluation and Trade-offs

Jasper-Token-Compression-600M achieves strong performance and efficiency trade-offs, as measured on the Massive Text Embedding Benchmark (MTEB):

| Model | MTEB (en) | MTEB (zh) | Inference time (1K tokens) |
| --- | --- | --- | --- |
| Vanilla 0.6B | 70.70 | 66.33 | 24.24 ms |
| Jasper-TC-600M (ρ = 0.5) | 74.75 | 73.51 | 13.11 ms (≈46% faster) |
| Jasper-TC-600M (ρ = 0.33) | 74.58 | N/A | 9.38 ms (×2.6 speed) |
| Jasper-TC-600M (ρ = 0.2) | 74.21 | N/A | 6.56 ms (×3.7 speed) |
| Jasper-TC-600M (ρ = 0.1) | N/A | N/A | 4.48 ms (×5.4 speed) |
| 8B teacher | 75.22 | 73.84 | N/A |

Reducing $\rho$ results in minimal drops in Mean(Task) scores while offering linear-to-superlinear throughput gains. At the default $\rho = 0.5$, the 600M model nearly matches the 8B teacher's quality at double the inference speed, and can reach up to $5\times$ speedups with only minor metric loss (Zhang et al., 18 Nov 2025).

6. Comparison with Prior Convolutional Token Compression

Prior approaches to token compression in NLP tasks, particularly deletion-based models, leveraged 1D convolutional encoder-decoder networks, employing U-Net style architectures with skip connections to retain fine-grained token information (Hou et al., 2020). However, these models focused primarily on sentence-level binary masking for deletion, producing a retained/deleted mask per token using a block of stacked 1D convolutions, max-pooling, and upsampling layers.

By contrast, Jasper-Token-Compression-600M applies a learnable MLP followed by a non-parametric adaptive pooling operation, reducing the entire sequence length prior to standard transformer processing and enabling highly efficient memory and compute profiles. A plausible implication is that this strategy avoids the masking and alignment complications found in probabilistic or hard-selection compression, while maintaining end-to-end differentiability and interpretability.

7. Significance and Potential Impacts

Jasper-Token-Compression-600M establishes that deep transformer models can incorporate trainable, fully differentiable compression modules to flexibly trade sequence length for efficiency, without catastrophic loss in embedding performance. Its approach integrates seamlessly into distillation and contrastive learning frameworks, offering practical deployment options for large-scale retrieval, embedding, and multilingual tasks. The methodology further generalizes the utility of convolution-inspired sequence reduction in modern transformer pipelines beyond application-specific sentence compression (Zhang et al., 18 Nov 2025, Hou et al., 2020).
