
FAST: Frequency-Space Action Sequence Tokenization

Updated 29 January 2026
  • FAST is a compression-based action tokenization method that uses DCT, quantization, and BPE to transform continuous robotic trajectories into compact, invertible tokens.
  • It employs a column-first flattening approach and quantile normalization to maintain spatio-temporal coherence and improve autoregressive performance in Transformer models.
  • FAST⁺ generalizes across diverse robot morphologies, leading to efficient policy learning and reductions in sample complexity and compute in high-frequency, dexterous tasks.

Frequency-Space Action Sequence Tokenization (FAST) is a compression-based action discretization method designed for effective integration with Transformer-based vision-language-action (VLA) models. FAST addresses fundamental limitations of conventional per-dimension, per-timestep binning, particularly in representing high-frequency and dexterous robot action trajectories. By combining the discrete cosine transform (DCT), quantile normalization, scalar quantization, and byte-pair encoding (BPE), FAST produces compact, invertible, and tunable discrete token streams from continuous robot control signals, significantly reducing the autoregressive horizon and sample complexity. The methodology facilitates efficient policy learning in vision-language-action models, supporting both generalist and specialized policies operating over diverse robotic morphologies and control rates (Pertsch et al., 16 Jan 2025).

1. Discrete Cosine Transform Formulation

FAST utilizes the type-II discrete cosine transform (DCT-II) for encoding each action channel into the frequency domain. For a length-$N$ action sequence $x_0, \ldots, x_{N-1}$, the transformation is given by:

$$X_k = \sum_{n=0}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right) k\right] \quad \text{for } k = 0, \ldots, N-1$$

Invertibility is maintained via the type-III DCT (DCT-III):

$$x_n = \frac{1}{N} X_0 + \frac{2}{N} \sum_{k=1}^{N-1} X_k \cos\left[\frac{\pi}{N} k \left(n + \tfrac{1}{2}\right)\right]$$

This procedure is applied independently to each of the $A$ action dimensions over temporal windows ("chunks") of $H$ steps, with $H$ typically corresponding to one second of control signal sampled at the robot's frequency. This frequency-space conversion is pivotal for reducing redundancy in highly smooth or periodic action sequences, especially at high sampling rates.
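To make the transform concrete, the following is a minimal Python sketch of the per-dimension DCT-II/DCT-III round trip using SciPy; the chunk length, action dimensionality, and random data are illustrative placeholders rather than values from the paper.

```python
import numpy as np
from scipy.fft import dct, idct

H, A = 50, 7                   # e.g. a 1-second chunk at 50 Hz with 7 action dimensions
chunk = np.random.randn(H, A)  # placeholder action chunk (H steps x A dimensions)

# DCT-II along the time axis, applied independently to each action dimension.
coeffs = dct(chunk, type=2, norm="ortho", axis=0)

# DCT-III is the inverse of DCT-II; without quantization the round trip is exact.
reconstructed = idct(coeffs, type=2, norm="ortho", axis=0)
assert np.allclose(chunk, reconstructed)
```

In practice this exact reconstruction is only the upper bound; the quantization step described next introduces a controlled amount of reconstruction error.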

2. Quantization, Discretization, and Compression

After DCT transformation, FAST processes the resulting $A \times H$ coefficient matrix via the following workflow:

  1. Quantile Normalization: Each action dimension's coefficients are scaled so that the 1st and 99th percentiles map to $[-1, 1]$, standardizing across diverse action scales.
  2. Scalar Quantization: All coefficients are multiplied by a scalar hyperparameter $\gamma$ (default $\gamma = 10$) and rounded to integers:

$$\bar{C}^{\,i}_{j} = \operatorname{round}\left(\gamma \cdot C^{\,i}_{j}\right), \qquad \bar{C}^{\,i}_{j} \in \mathbb{Z}$$

$\gamma$ governs the fidelity–compression trade-off: higher $\gamma$ yields finer quantization and more tokens; lower $\gamma$ induces coarser approximation and shorter sequences.

  3. Column-First Flattening: The quantized coefficient matrix is linearized such that all low-frequency coefficients across dimensions are sequenced before higher-frequency terms. This empirically improves Transformers' autoregressive rollout stability.
  4. Byte-Pair Encoding: A BPE tokenizer (typical vocabulary size $V = 1024$) is fit to these integer sequences, merging repeated zeros and frequent patterns. BPE compression typically reduces sequence length by 5–15× versus naive binning, yielding a final discrete token stream $T_1, \ldots, T_L$ with $L \ll A \cdot H$.

The combination of DCT-domain sparsity, quantization control, and BPE achieves significant rate–distortion performance improvements without reliance on neural tokenizers. Empirically, $\gamma = 10$ and $V = 1024$ lead to sub-millimeter action RMSE across diverse tasks.
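As an illustration of steps 1–3 above, the following minimal Python sketch applies quantile normalization, scalar quantization with $\gamma$, and column-first flattening to a single chunk; the BPE stage is omitted and would be fit on the resulting integer sequences with any off-the-shelf BPE implementation. The chunk size, dimension count, percentile statistics, and the helper name are placeholders.

```python
import numpy as np
from scipy.fft import dct

def encode_chunk(chunk, coeff_low, coeff_high, gamma=10):
    """chunk: (H, A) action chunk; coeff_low / coeff_high: per-dimension 1st and
    99th percentiles of the DCT coefficients, estimated over a training dataset."""
    coeffs = dct(chunk, type=2, norm="ortho", axis=0)                     # per-dimension DCT-II
    normed = 2.0 * (coeffs - coeff_low) / (coeff_high - coeff_low) - 1.0  # percentiles -> [-1, 1]
    quantized = np.round(gamma * normed).astype(np.int64)                 # scalar quantization
    # Column-first flattening: frequency 0 for every dimension, then frequency 1, ...
    return quantized.flatten(order="C")

H, A = 50, 7                                   # e.g. 1 s at 50 Hz, 7 action dimensions
chunk = np.random.randn(H, A)                  # placeholder actions
coeffs = dct(chunk, type=2, norm="ortho", axis=0)
coeff_low, coeff_high = np.percentile(coeffs, [1, 99], axis=0)
pre_bpe_ids = encode_chunk(chunk, coeff_low, coeff_high)
print(pre_bpe_ids.shape)                       # (H * A,) integers, ready for BPE
```

Decoding reverses the same steps: BPE detokenization, un-flattening to the coefficient matrix, rescaling by $1/\gamma$ and the inverse of the quantile map, and finally the DCT-III.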

3. The FAST⁺ Universal Tokenizer

FAST⁺ extends FAST by providing a pretrained, architecture-agnostic universal action tokenizer:

  • Training Data: 1,000,000 one-second action sequences from a mixture of single-arm, bi-manual, and mobile robots, spanning joint-space and end-effector action spaces, at control rates from 5 to 50 Hz.
  • Objective: BPE is trained on the quantized, flattened DCT coefficients; nothing beyond the BPE merge table is learned.
  • Generalization: FAST⁺ applies to novel robot morphologies and frequencies, achieving 2–5× token count reduction over naive binning with no loss in reconstruction accuracy.
  • Deployment: Exposed through the HuggingFace AutoProcessor API, enabling black-box application with minimal code (see the usage sketch below).

Without retraining, FAST⁺ compresses unseen action streams efficiently, supporting its designation as a universal robot action tokenizer.
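As a usage illustration, a FAST⁺-style tokenizer hosted on the HuggingFace Hub can be loaded through AutoProcessor as sketched below; the repository id and the encode/decode call pattern are assumptions based on the released tokenizer's model card, which should be treated as the authoritative reference.

```python
import numpy as np
from transformers import AutoProcessor

# Assumed repository id for the released universal tokenizer; trust_remote_code
# is needed because the processor ships custom tokenization logic.
tokenizer = AutoProcessor.from_pretrained(
    "physical-intelligence/fast", trust_remote_code=True
)

action_chunk = np.random.rand(1, 50, 7)   # (batch, timesteps, action_dim), placeholder data
tokens = tokenizer(action_chunk)          # encode: per-example lists of integer tokens
actions = tokenizer.decode(tokens, time_horizon=50, action_dim=7)  # decode back to actions
```

Because the tokenizer is purely a compression pipeline (DCT, quantization, and a BPE merge table), no neural network weights are involved in either encoding or decoding.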

4. Integration with Autoregressive Vision-Language-Action Models

FAST integrates seamlessly with Transformer-based VLAs—such as π₀, PaliGemma-3B, and Prismatic-7B—by substituting the least used tokens in the model’s vocabulary with FAST BPE tokens. The input sequence at training and inference comprises:

  • [image tokens]
  • [language instruction tokens]
  • [proprioceptive tokens]
  • [action tokens to be predicted (FAST tokens)]

Standard 1D positional encodings are used throughout the sequence, with FAST tokens occupying a contiguous tail region; no 2D encodings are added for time/frequency. The models employ standard next-token prediction with cross-entropy loss, and DCT inversion is performed offline after decoding. No auxiliary regression heads or additional objective terms are needed (Pertsch et al., 16 Jan 2025).
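A minimal sketch of this token-splicing setup follows; the vocabulary size, the offset-based reuse of the vocabulary tail as the "least used" region, and the masking of the loss to the action tail are illustrative assumptions rather than details fixed by the source.

```python
import torch

VOCAB_SIZE = 257_152   # placeholder VLM vocabulary size
FAST_VOCAB = 1024      # FAST BPE vocabulary size quoted above
IGNORE_INDEX = -100    # conventional ignore index for cross-entropy loss

def remap_fast_tokens(fast_ids: torch.Tensor) -> torch.Tensor:
    """Map FAST ids onto the tail of the VLM vocabulary (the region being reused)."""
    return VOCAB_SIZE - FAST_VOCAB + fast_ids

def build_training_sequence(prefix_ids: torch.Tensor, fast_ids: torch.Tensor):
    """prefix_ids: image + language + proprioceptive tokens; fast_ids: FAST BPE ids."""
    action_ids = remap_fast_tokens(fast_ids)
    input_ids = torch.cat([prefix_ids, action_ids])
    # Standard next-token cross-entropy, with the loss restricted to the action tail.
    labels = torch.cat([torch.full_like(prefix_ids, IGNORE_INDEX), action_ids])
    return input_ids, labels

prefix = torch.randint(0, VOCAB_SIZE - FAST_VOCAB, (300,))  # placeholder prefix tokens
fast = torch.randint(0, FAST_VOCAB, (30,))                  # roughly 30 action tokens per chunk
input_ids, labels = build_training_sequence(prefix, fast)
```

At inference, the model autoregressively samples tokens from the reused tail of the vocabulary, which are remapped to FAST ids and then inverted through BPE, de-quantization, and the DCT-III to recover the action chunk.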

5. Empirical Performance and Policy Learning Outcomes

FAST achieves substantial token compression and improved policy learning efficiency, especially in high-frequency or dexterous manipulation contexts.

| Dataset | Action dims ($A$) | Control rate (Hz) | Naïve tokens | FAST tokens | Compression |
| --- | --- | --- | --- | --- | --- |
| BridgeV2 | 7 | 5 | 35 | 20 | 1.75× |
| DROID | 7 | 15 | 105 | 29 | 3.6× |
| Table Bussing | 7 | 20 | 140 | 28 | 5.0× |
| T-Shirt Fold | 14 | 50 | 700 | 53 | 13.2× |

Key results include:

  • High-frequency policy robustness: Naive binning fails on tasks >20 Hz; FAST maintains low reconstruction MSE up to 800 Hz.
  • Sample and compute efficiency: On Table Bussing, π₀+FAST achieves 90% task success in roughly one-third of the updates required by diffusion π₀; in large-scale multitask training (10k hours), π₀+FAST matches the final performance of diffusion models with 5× less GPU compute.
  • Zero-shot generalization: π₀+FAST attains approximately 60% average rubric score in the first-ever zero-shot evaluation on unseen DROID environments, outperforming prior supervised baselines (Pertsch et al., 16 Jan 2025).

6. Design Considerations and Ablation Analyses

  • Chunk Length: 1-second chunks offer a balance between compression and long-horizon consistency. Shorter chunks slightly reduce token count, but degrade temporal coherence.
  • Flattening Order: "Column-first" (interleaving dimensions per frequency) yields superior rollout stability versus row-first ordering.
  • BPE Ablation: Direct tokenization of each quantized coefficient without BPE still compresses but results in approximately 5× more tokens (primarily zeros), harming sample efficiency.
  • Control Frequency: FAST consistently maintains low reconstruction MSE from 25 to 800 Hz; naive binning incurs rapidly deteriorating performance above 100 Hz.
  • Inference Latency: Per-chunk autoregressive decoding requires approximately 750 ms for 30–60 tokens, compared to 100 ms for diffusion π₀; further acceleration is possible via speculative decoding or quantized kernels.

7. Summary and Significance

FAST provides a DCT–quantization–BPE pipeline for transforming continuous robot trajectories into discrete, compact token sequences that are invertible and tunable. By avoiding the highly redundant, low-information tokens that naive per-timestep discretization produces on smooth, high-frequency signals, and by reducing the required autoregressive horizon, FAST enables efficient and scalable training of off-the-shelf vision-language Transformers for robotic control. The methodology supports broad applicability across robot morphologies and control frequencies, as demonstrated by the universal FAST⁺ tokenizer and its empirical performance on challenging dexterous and long-horizon tasks (Pertsch et al., 16 Jan 2025).
