Real-Time Chunking (RTC)

Updated 5 January 2026
  • Real-Time Chunking (RTC) is a method for decoupling high-latency inference from high-frequency control, achieving smooth action transitions.
  • It uses asynchronous chunk generation, prefix conditioning, and context-aware inpainting to maintain temporal continuity in applications like robotics and streaming ASR.
  • Training-time RTC simulates delay with prefix masking to eliminate costly backward passes, reducing latency and enhancing system responsiveness.

Real-Time Chunking (RTC) is a class of algorithmic strategies for managing prediction, control, and interfacing in high-latency settings, where outputs must be generated and consumed as fixed-length chunks under stringent real-time constraints. Originally motivated by the deployment of large Vision-Language-Action (VLA) models in robotic control and further extended to real-time streaming speech recognition, RTC provides a principled solution for smooth, low-latency actuation or output emission by decoupling high-latency inference from high-frequency control via asynchronous chunk generation, prefix conditioning, and context-aware inpainting.

1. Formal Definition and Algorithmic Foundations

Let $p(A_t \mid o_t)$ be a chunking policy that, at controller timestep $t$, predicts a block $A_t = [a_t, a_{t+1}, \ldots, a_{t+H-1}]$ of $H$ future actions or outputs. The system executes only the first $s \leq H$ elements of $A_t$ (the "execution horizon") while asynchronously computing the next chunk. Inference takes $d$ controller timesteps, creating a mandatory overlap between the last $d$ steps of the previous chunk and the first $d$ steps of the new chunk. The core RTC constraint is $d \leq H - s$, so that the action prefix (the overlap) is always available when needed. This paradigm enforces temporal continuity and closed-loop responsiveness, mitigating the mode-jumping and discontinuities prevalent in naively chunked or synchronous systems (Black et al., 9 Jun 2025).
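The feasibility constraint can be made concrete with a toy scheduler. In the sketch below, all numbers and the fallback rule are illustrative, not taken from the papers: chunk $k$ is triggered at step $k s$, becomes available $d$ steps later, and the previous chunk's tail covers the gap.

```python
# Toy model of asynchronous chunk scheduling.  Chunk k is triggered at
# controller step k*s, finishes computing d steps later, and covers
# steps [k*s, k*s + H).  Until it is ready, the previous chunk's tail
# keeps executing; d <= H - s guarantees that tail never runs out.
def action_source(t, H, s, d):
    """Return (chunk index, within-chunk index) executed at step t."""
    k = t // s                     # chunk whose execution window covers t
    if t >= k * s + d:             # fresh chunk is ready
        return k, t - k * s
    return k - 1, t - (k - 1) * s  # fall back to the previous chunk's tail

H, s, d = 8, 4, 3                  # illustrative chunk/horizon/delay sizes
assert d <= H - s                  # the core RTC feasibility constraint
for t in range(s, 40):             # steps after the bootstrap chunk
    _, idx = action_source(t, H, s, d)
    assert 0 <= idx < H            # the controller is never starved
```

The final assertion is exactly the constraint: the fallback index peaks at $s + d - 1$, which stays below $H$ iff $d \leq H - s$.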

The foundational implementation of RTC utilizes diffusion or flow-matching samplers (e.g., transformer velocity models $v_\theta(x, \tau)$ for $\tau \in [0,1]$), employing inference-time "inpainting" (pseudoinverse guidance) to condition on action prefixes at each denoising step:

  • For the prefix $\mathbf{x}[0:d]$, enforce hard equality with the prior chunk.
  • For timesteps $d \leq i < H - s$, introduce a soft mask $W_i \in [0,1]$ to nudge predictions toward continuity.
  • The total generation objective involves solving small linear systems via vector–Jacobian products (VJPs) at every sampler step (Black et al., 9 Jun 2025, Black et al., 5 Dec 2025).

Formally, at each denoising/integration step,

$$\tilde{v}(A^{\tau}, o, \tau) = v_\pi(A^{\tau}, o, \tau) + \lambda(\tau)\,(Y - \hat{A}^1)^{\top}\operatorname{diag}(W)\,\partial_{A^{\tau}}\hat{A}^1,$$

where $Y$ encodes the frozen prefix actions, $W$ is the soft mask, and $\lambda(\tau)$ is a guidance weight (Black et al., 9 Jun 2025).
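A minimal sketch of one guided step, assuming a toy linear velocity field in place of the learned policy; the shapes, $\lambda$, and the mask values are illustrative, and a real implementation would compute the Jacobian term with autodiff rather than in closed form:

```python
import numpy as np

H, dim = 8, 2

def v_pi(A_tau, tau):
    # Stand-in velocity field; a real policy is a neural network.
    return -A_tau

def denoised_estimate(A_tau, tau):
    # One-step flow-matching estimate of the clean chunk A^1.
    return A_tau + (1.0 - tau) * v_pi(A_tau, tau)

def guided_velocity(A_tau, tau, Y, W, lam=1.0):
    """Velocity with pseudoinverse guidance toward the frozen prefix Y."""
    A_hat1 = denoised_estimate(A_tau, tau)
    # For this linear toy model, dA_hat1/dA_tau = tau * I, so the
    # vector-Jacobian product in the formula collapses to a scalar multiply.
    return v_pi(A_tau, tau) + lam * tau * (W[:, None] * (Y - A_hat1))

rng = np.random.default_rng(0)
A_tau = rng.standard_normal((H, dim))      # current noisy chunk
Y = np.zeros((H, dim))                     # frozen prefix targets (toy values)
W = (np.arange(H) < 3).astype(float)       # hard weight on a 3-step prefix
v = guided_velocity(A_tau, 0.5, Y, W)
```

With $W = 0$ the correction vanishes and the guided velocity reduces to the base policy's, which is a quick sanity check on any implementation.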

2. Methodological Innovations: Training-Time Conditioning

Traditional RTC incurs significant computational overhead at inference due to repeated VJP/backprop evaluations within each generation loop. To eliminate this cost, training-time action conditioning ("training-time RTC") has been developed (Black et al., 5 Dec 2025). Here, the effect of the prefix is simulated during training, such that inference can be performed with only forward passes and hard prefix clamping.

The training-time RTC method:

  • During training, sample an artificial delay $d$ from a distribution $p(d)$ and mask the first $d$ steps of each target chunk as the prefix.
  • For $i < d$, override the noise schedule with $\tau_i = 1$ (i.e., no noise; the ground-truth action is used); for $i \geq d$, set $\tau_i \sim \mathrm{Uniform}(0,1)$.
  • Only the postfix timesteps ($i \geq d$) contribute to the denoising (flow-matching) loss:

$$L(\theta) = \mathbb{E}_{d,\tau,\epsilon} \sum_{i=0}^{H-1} \bar{m}_i \left\| v_\theta(x, o, \tau)[i] - (\epsilon_i - A_i) \right\|^2,$$

with $m_i = \mathbf{1}_{i<d}$ the prefix mask and $\bar{m}_i = 1 - m_i$ its complement, so only postfix timesteps are penalized (Black et al., 5 Dec 2025).

At inference, this enables pure forward-sampling: clamp the prefix and run the generative model as usual, leveraging the prefix-conditioned training.
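Under illustrative assumptions (toy shapes, a uniform delay distribution, and the sign convention of the loss above for the velocity target), the training-side recipe can be sketched as:

```python
import numpy as np

H, dim = 8, 2
rng = np.random.default_rng(0)

def make_training_example(actions):
    """Prefix-masked flow-matching inputs and targets for one chunk."""
    d = int(rng.integers(0, H))               # simulated inference delay
    tau = rng.uniform(0.0, 1.0, size=H)
    tau[:d] = 1.0                             # prefix: no noise, clean action
    eps = rng.standard_normal(actions.shape)
    # Interpolate so tau=1 gives the clean action and tau=0 pure noise.
    x = tau[:, None] * actions + (1.0 - tau[:, None]) * eps
    target = eps - actions                    # velocity target (loss's sign)
    postfix_mask = (np.arange(H) >= d).astype(float)  # the complement mask
    return x, tau, target, postfix_mask, d

def masked_loss(pred_v, target, postfix_mask):
    """Flow-matching loss over postfix timesteps only."""
    per_step = ((pred_v - target) ** 2).sum(axis=-1)
    return float((postfix_mask * per_step).sum() / max(postfix_mask.sum(), 1.0))

actions = rng.standard_normal((H, dim))
x, tau, target, mask, d = make_training_example(actions)
assert np.all(tau[:d] == 1.0) and np.allclose(x[:d], actions[:d])
```

Because $\tau_i = 1$ reproduces the clean action exactly, the prefix entries of $x$ equal the ground-truth actions, which is what makes inference-time hard clamping consistent with training.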

3. RTC Across Application Domains

Robotics and Vision-Language-Action Models

RTC originated in robotics for VLA control, where billion-parameter models incur unavoidable inference latency (Black et al., 9 Jun 2025). Key benchmarks such as Kinetix (12 dynamic simulated tasks) and real-world setups (UR5 bimanual manipulation: candle lighting, folding, plug insertion, etc.) have established the following:

  • RTC, when compared to synchronous chunking or naïve asynchrony, increases task completion success rates (e.g., up to 70% vs 30% at $d=4$) and maintains constant real-time throughput up to +200 ms of injected delay (Black et al., 9 Jun 2025).
  • Training-time RTC delivers up to 20 points higher success at large delays (80% vs 60% at $d=4$) and saves approximately 27 ms per chunk, with no additional training-side parameter cost (Black et al., 5 Dec 2025).

Speech Recognition and Streaming Inference

RTC has also been applied in streaming ASR, where chunked processing risks accuracy loss due to missing right context ("future" frames) (Le et al., 21 Feb 2025). The Time-Shifted Contextual Attention (TSCA) mechanism enables future context utilization at no additional user-perceived latency by rotation-based prefixing, and Dynamic Right Context (DRC) masking allows the model to generalize across varied future context windows during training.
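As an illustration of the masking idea (the paper's exact scheme may differ; the chunk and context sizes below are toy values), a chunked attention mask with a configurable right-context window can be built as:

```python
import numpy as np

def chunked_attention_mask(T, chunk, right_ctx):
    """mask[i, j] = True iff frame i may attend to frame j."""
    mask = np.zeros((T, T), dtype=bool)
    for i in range(T):
        c = i // chunk                            # chunk containing frame i
        end = min((c + 1) * chunk + right_ctx, T) # chunk end + future window
        mask[i, :end] = True                      # full left context allowed
    return mask

# In DRC-style training, right_ctx would be sampled per batch so the model
# generalizes across future-context budgets; at inference it is fixed by
# the latency budget.
m = chunked_attention_mask(T=8, chunk=2, right_ctx=1)
```

Varying `right_ctx` during training is what lets a single model be deployed at different latency/accuracy operating points.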

On LibriSpeech:

  • RTC (TSCA+DRC) achieves relative word error rate (WER) reductions of 10–13.9% compared to conventional chunking, with no extra waiting time (Le et al., 21 Feb 2025).

4. Quantitative Evaluation and Experimental Findings

The impact of RTC and related schemes is substantiated by extensive empirical results:

| Method/Setting | Success Rate (Kinetix, $d=4$) | Throughput (real robot, +200 ms) | WER Reduction (ASR) |
|---|---|---|---|
| Synchronous chunking | 30 ± 4% | 0.08 ± 0.01 | — |
| Naïve async | 22 ± 4% | 0.11 ± 0.02 | — |
| Bidirectional (BID) | 45 ± 4% | — | — |
| Temporal Ensembling (dense) | 5 ± 2% | n/a (oscillations/failures) | — |
| RTC (hard mask) | 60 ± 3% | 0.15 ± 0.01 | — |
| RTC (soft mask) | 70 ± 3% | 0.15 ± 0.01 | — |
| Training-time RTC | ~80% | 27 ms lower latency | — |
| TSCA+DRC (ASR) | — | — | 10–13.9% relative |

RTC clearly outperforms all baselines in delay robustness, smoothness of transitions, and real-time efficiency. In speech, TSCA+DRC yields statistically significant WER reductions without additional system delay (Le et al., 21 Feb 2025, Black et al., 9 Jun 2025, Black et al., 5 Dec 2025).

5. Algorithmic Extensions and Orthogonal Enhancements

Orthogonal interventions such as Asynchronous Action Chunk Correction (A2C2) enhance RTC by adding a lightweight correction head that post-processes base chunked actions at every control step using new observations, positional indices, and policy features (Sendai et al., 27 Sep 2025). A2C2, when used in conjunction with RTC, further increases closed-loop responsiveness and success, with negligible computation (4.7 ms per step on SmolVLA vs 101 ms for base inference), and delivers up to 23 percentage points higher success in heavily delayed regimes.
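A hedged sketch of a per-step correction head in the spirit of A2C2 follows; the linear "head", the feature choice, and all shapes are illustrative stand-ins for the paper's learned module:

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim, H = 4, 2, 8
# Toy linear head over (observation, base action, positional index).
W_head = rng.standard_normal((act_dim, obs_dim + act_dim + 1)) * 0.01

def correct(base_action, obs, index):
    """Adjust one chunked action using the latest observation."""
    feats = np.concatenate([obs, base_action, [index / H]])
    return base_action + W_head @ feats       # small residual correction

chunk = rng.standard_normal((H, act_dim))     # stale chunk from base policy
for i, a in enumerate(chunk):                 # cheap per-step post-processing
    obs = rng.standard_normal(obs_dim)        # fresh observation at step i
    a_corrected = correct(a, obs, i)
```

The residual form is the key design point: the base chunk is kept intact and only nudged per step, so the head stays small enough to run at control frequency.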

RTC and such correction heads are model-agnostic and require no retraining or architectural modifications for integration, making them widely applicable in systems with diverse backbone architectures (Sendai et al., 27 Sep 2025).

6. Implementation Guidelines, Complexity, and Practical Considerations

RTC can be retrofitted onto existing diffusion/flow policies with minimal changes:

  • Training-time RTC: 3–5 lines added to the training loop for prefix sampling/masking, and simple prefix clamping at inference (Black et al., 5 Dec 2025).
  • Inference-time RTC: implemented as a multithreaded wrapper that requires a forward and a backward pass at each denoising step, with the soft-masking and pseudoinverse-guidance logic of (Black et al., 9 Jun 2025).

Training-time conditioning entirely eliminates inference-side VJP/backprop steps, reducing latency (e.g., 108 ms vs 135 ms per chunk) and maintaining or exceeding performance under high-latency scenarios (Black et al., 5 Dec 2025). RTC imposes no extra parameter cost or buffer requirement beyond those present in standard policy architectures supporting per-token conditioning or attention.

In streaming ASR, RTC (via TSCA+DRC) is integrated at the attention mask and chunk-rotation level, requiring no structural overhaul or model retraining (Le et al., 21 Feb 2025).

7. Limitations and Future Research Directions

RTC’s effectiveness is bounded by the overlap constraint $d \leq H - s$; very large $d$ relative to the chunk size $H$ limits achievable smoothness and latency amortization. Pseudoinverse-guided inpainting can become unstable when guidance weights or the number of integration steps are set suboptimally. In training-time RTC, the granularity of delay simulation and masking sampling impacts robustness to real-world deployment delays.

Areas for further research include:

  • Extending adaptive horizon selection and dynamic execution scheduling for systems with unpredictable or nonstationary inference costs.
  • Joint modeling of RTC with streaming feedback correction methods (e.g., A2C2), exploiting their orthogonality.
  • Exploring RTC in domains beyond robotics and ASR, such as autonomous navigation or high-frequency financial trading, where strict latency constraints meet dynamically evolving environments.

RTC remains a critical methodology for the deployment of high-capacity, high-latency policies in real-time systems, enabling both continuity and responsiveness at scale (Black et al., 9 Jun 2025, Black et al., 5 Dec 2025, Le et al., 21 Feb 2025, Sendai et al., 27 Sep 2025).
