
Latent-Aware Action Streaming (LAAS)

Updated 3 February 2026
  • Latent-Aware Action Streaming (LAAS) is an execution-time strategy that restores temporal alignment between perception and action in dynamic object manipulation.
  • The method uses a novel chunk-wise streaming regime that overlaps inference and execution by applying gating rules to select the freshest valid action.
  • Empirical results show significant improvements in manipulation success rates, with full integration yielding up to a 16.8% boost over baseline methods.

Latent-Aware Action Streaming (LAAS) is an execution-time algorithmic module introduced in the DynamicVLA framework to restore and enforce real-time temporal alignment between multimodal perception, model-based action generation, and execution in dynamic object manipulation. It addresses a critical problem for Vision-Language-Action (VLA) models operating in environments where the target's physical state (6-DoF pose and velocity) evolves continually, even during the inference and execution pipeline, causing perception–execution misalignment and degraded manipulation precision. LAAS streams the most temporally relevant predicted actions to the controller through a chunk-wise inference and execution scheme governed by explicit gating rules and overlap-resolution strategies, yielding statistically significant gains in dynamic manipulation success rates under low-latency closed-loop constraints (Xie et al., 29 Jan 2026).

1. Motivation and Problem Statement

DynamicVLA targets environments where objects are in motion during both perception and action generation. In these settings, inference latency $m$ causes a temporal disconnect: at time $t$, the model observes the world and predicts an action chunk $\{\mathbf{a}_t, \ldots, \mathbf{a}_{t+n}\}$. However, actions meant for timesteps near $t$ will execute after a delay and may therefore be stale, as the physical state $\mathbf{s}_t$ may have changed significantly by execution time $t+m$. Standard chunk-based execution ("predict and execute to completion") accumulates a perception–execution gap, while Continuous Inference (CI) alone introduces overlapping predictions for the same control times, risking conflict.

LAAS targets two core issues:

  • Perception–Execution Gap: actions become stale when they are executed after a significant inference delay, because the world state has evolved since the observation was made.
  • Overlapping Chunk Conflicts: With CI, multiple chunks may propose actions for the same timestep, leading to ambiguity in real-time selection.
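To make the first failure mode concrete, the sketch below estimates how far a moving target drifts during the inference delay. The delay, control rate, and object speed are illustrative assumptions, not values from the paper.

```python
# Illustrative magnitude of the perception-execution gap: how far does the
# target move between observation and the first executed action?
# All numbers here are hypothetical, chosen only to show the scale.

def stale_displacement(delay_steps, control_hz, object_speed_m_s):
    """Object displacement accumulated over the inference delay."""
    return delay_steps / control_hz * object_speed_m_s

# An inference delay of 3 control steps at 25 Hz, object moving at 0.5 m/s:
print(stale_displacement(3, 25, 0.5))  # -> 0.06 m of drift before execution
```

Even a few control steps of latency can move the target by centimeters, which is why actions predicted for the delay window are better discarded than executed.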

2. Integration within the DynamicVLA Pipeline

The DynamicVLA architecture comprises:

  1. A compact 0.4B-parameter Vision-Language backbone (FastViT + SmolLM2-360M) encoding short time windows of visual, linguistic, and proprioceptive inputs $(\mathbf{O}_t, \mathbf{L}_t, \mathbf{P}_t)$.
  2. A diffusion-based action expert $\mathcal{E}_\theta$, conditioned on backbone features $\mathbf{f}_t$, producing an action chunk $\mathbf{A}_t = \{\mathbf{a}_t, \ldots, \mathbf{a}_{t+n}\}$.
  3. An execution controller implementing both Continuous Inference and LAAS.

Each cycle involves:

  • Asynchronous inference for $\mathbf{A}_t$.
  • Upon chunk availability, scheduling the next inference for $\mathbf{A}_{t+m}$.
  • At each control step $\tau$, LAAS selects the freshest valid action using its gating function.

This streaming regime ensures that inference and execution are fully overlapped, and that for each timestep, action selection dynamically prioritizes temporal relevance.

3. Formal Definition and Algorithm

Given:

  • $m$ = inference latency in control steps,
  • $n$ = action chunk length ($n > m$ by design),
  • At inference start $t$, the model outputs $\mathbf{A}_t = \{\mathbf{a}_t, \dots, \mathbf{a}_{t+n}\}$, which becomes available at $t+m$.

Define the set of chunk start times:

$$\mathcal{T} = \{t,\ t+m,\ t+2m,\ \ldots\}.$$

For chunk $t_j \in \mathcal{T}$ and control step $\tau$, the gating function is

$$g(t_j, \tau) = \begin{cases} 1 & \text{if } t_j + m \leq \tau \leq t_j + n, \\ 0 & \text{otherwise.} \end{cases}$$

At time $\tau$, LAAS executes

$$\mathbf{a}_\tau = \mathbf{A}_{t^*}[\tau - t^*], \quad \text{where} \quad t^* = \arg\max_{t_j \in \mathcal{T}:\ g(t_j, \tau) = 1} t_j.$$

This ensures that, among all eligible chunks, the most recently generated one (with the latest $t_j$) supplies the action for $\tau$.
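The gating rule and freshest-chunk selection translate directly into a few lines of code. The sketch below follows the definitions above; chunk timings ($m = 2$, $n = 5$) and the string-labeled actions are illustrative assumptions.

```python
# Sketch of the LAAS gating rule g(t_j, tau) and freshest-chunk selection.
# Chunk timings and contents below are toy assumptions for illustration.

def gate(t_j, tau, m, n):
    """g(t_j, tau): a chunk started at t_j is valid at control step tau once
    inference has finished (tau >= t_j + m) and before it runs out
    (tau <= t_j + n)."""
    return t_j + m <= tau <= t_j + n

def select_action(chunks, tau, m, n):
    """Among all valid chunks, pick the freshest (largest t_j) and index into
    it at offset tau - t_j. `chunks` maps a start time t_j to a list of
    n + 1 actions. Returns None when no chunk is valid."""
    valid = [t_j for t_j in chunks if gate(t_j, tau, m, n)]
    if not valid:
        return None
    t_star = max(valid)
    return chunks[t_star][tau - t_star]

# Toy example: m = 2, n = 5; chunks were started at t = 0 and t = 2.
m, n = 2, 5
chunks = {0: [f"a0[{k}]" for k in range(n + 1)],
          2: [f"a2[{k}]" for k in range(n + 1)]}
print(select_action(chunks, 3, m, n))  # only chunk 0 is valid -> a0[3]
print(select_action(chunks, 4, m, n))  # chunk 2 now valid and fresher -> a2[2]
```

At $\tau = 4$ both chunks are eligible, and the rule resolves the overlap in favor of the fresher chunk started at $t_j = 2$.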

The streaming execution process is as follows:

Initialize t ← 0
Launch inference for chunk A_t asynchronously
While not done:
  if the inference launched at t_j finishes at the current step τ:
    store A_{t_j} and mark it available
    t ← t_j + m; launch inference for A_t
  valid_chunks ← {t_j | A_{t_j} available and t_j + m ≤ τ ≤ t_j + n}
  if valid_chunks is non-empty:
    t* ← max(valid_chunks)
    execute a_τ ← A_{t*}[τ − t*]
  else:
    hold the previous action or a safe default
  τ ← τ + 1   # advance the control timestep
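The streaming loop above can be exercised in a small discrete-time simulation. Latency and chunk length ($m = 2$, $n = 5$) are illustrative; actions are labeled by their originating chunk so the hand-off between chunks is visible.

```python
# Minimal discrete-time simulation of the LAAS streaming loop.
# m = 2 and n = 5 are toy values; action (t_j, k) means "k-th action of the
# chunk whose inference started at t_j".

def simulate(steps, m=2, n=5):
    chunks = {}    # t_j -> action list, filled in once inference finishes
    pending = 0    # start time of the chunk currently being inferred
    trace = []
    for tau in range(steps):
        # Inference launched at `pending` finishes m steps later; upon
        # completion, immediately launch the next chunk (Continuous Inference).
        if tau == pending + m:
            chunks[pending] = [(pending, k) for k in range(n + 1)]
            pending += m
        # LAAS gating: keep chunks whose window [t_j + m, t_j + n] covers tau,
        # then execute the freshest one (or None, i.e. hold a safe default).
        valid = [t_j for t_j in chunks if t_j + m <= tau <= t_j + n]
        trace.append(chunks[max(valid)][tau - max(valid)] if valid else None)
    return trace

trace = simulate(8)
print(trace)
# [None, None, (0, 2), (0, 3), (2, 2), (2, 3), (4, 2), (4, 3)]
```

Note how, with $m = 2$, each chunk contributes exactly two actions before a fresher one takes over, and the first $m$ steps have no valid chunk, matching the "hold previous action or safe default" branch.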

Key characteristics:

  • Inference and execution are overlapped; there is never inter-chunk waiting.
  • Actions predicted for $t_j \leq \tau < t_j + m$ are treated as stale and discarded.
  • When multiple chunks overlap, only the action from the freshest chunk (highest $t_j$) is executed at that step.

4. Hyperparameters and Implementation Choices

LAAS requires setting several critical hyperparameters:

  • Action chunk length $n$: set to 20 control timesteps.
  • Inference delay $m$: typically 2–4 control steps, determined by model and hardware. For DynamicVLA running at 8 Hz on an A6000 GPU, $m \approx 2$ at a 25 Hz control rate.
  • Requirement: $n > m$ is necessary to prevent execution starvation.
  • Action buffer: No explicit multi-chunk buffer; only the latest two chunks are stored, and the gating rule discards obsolete vectors.

The chunk-based scheme is adaptive: the gating approach supports variable inference latencies as hardware or model size changes, and scales to different operation frequencies.
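When porting the scheme to new hardware or control rates, $m$ can be re-derived from the measured inference rate. The helper below is a back-of-envelope sketch with illustrative numbers, not the paper's exact timings.

```python
import math

# Back-of-envelope sizing of m (inference delay in control steps) from an
# inference rate and a control rate. Numbers below are illustrative.

def inference_delay_steps(inference_hz, control_hz):
    """m = inference latency expressed in control timesteps, rounded up."""
    return math.ceil(control_hz / inference_hz)

print(inference_delay_steps(12.5, 25))  # 80 ms latency at 25 Hz -> m = 2
print(inference_delay_steps(8, 25))    # 125 ms latency at 25 Hz -> m = 4

# The n > m requirement must hold for the chosen chunk length, here n = 20:
assert inference_delay_steps(12.5, 25) < 20
```

A slower model or faster control loop inflates $m$, so the check against $n$ guards against execution starvation after a hardware change.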

5. Training Protocol and Theoretical Considerations

LAAS is an execution-phase strategy; it does not introduce new parameters or loss terms into model training. The action expert $\mathcal{E}_\theta$ is trained with a standard flow-matching diffusion loss, as in Equation (1) of the paper:

$$\ell^\tau(\theta) = \mathbb{E}_{p(\mathbf{A}_t \mid \mathbf{f}_t)\, q(\mathbf{A}_t^\tau \mid \mathbf{A}_t)} \left\| \mathcal{E}_\theta\left(\mathbf{A}_t^\tau, \mathbf{O}_t\right) - \mathbf{u}\left(\mathbf{A}_t^\tau \mid \mathbf{A}_t\right) \right\|^2.$$

The streaming mechanism acts solely at runtime to enforce temporal reliability under real-time control, leaving the underlying action-generation distribution unaltered.
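The structure of this objective can be sketched in plain Python using a common linear (rectified-flow) interpolation path, whose conditional velocity target is $\mathbf{u} = \mathbf{A} - \boldsymbol{\epsilon}$. The stand-in model below is hypothetical, and conditioning on $\mathbf{O}_t$ is omitted for brevity.

```python
import random

# Toy sketch of a flow-matching loss: noise the clean chunk along a linear
# path and regress the model's output onto the conditional velocity.
# `model` is a hypothetical stand-in, not the paper's action expert.

def flow_matching_loss(model, A, tau, eps):
    # Linear path A^tau = (1 - tau) * eps + tau * A, with velocity
    # u(A^tau | A) = A - eps pointing from noise toward the data.
    A_tau = [(1 - tau) * e + tau * a for a, e in zip(A, eps)]
    u = [a - e for a, e in zip(A, eps)]
    pred = model(A_tau, tau)
    return sum((p - t) ** 2 for p, t in zip(pred, u)) / len(A)

# Sanity check: an oracle that returns the true velocity gives zero loss.
A = [0.3, -0.1, 0.7]                          # a toy 3-step action "chunk"
eps = [random.gauss(0.0, 1.0) for _ in A]     # sampled noise
oracle = lambda A_tau, tau: [a - e for a, e in zip(A, eps)]
print(flow_matching_loss(oracle, A, 0.4, eps))  # -> 0.0
```

A trained expert approximates this velocity field, which is all LAAS needs: the runtime gating layer consumes whatever chunks the expert emits, without touching this objective.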

6. Empirical Performance and Ablation Analyses

Empirical evaluation using the Dynamic Object Manipulation (DOM) benchmark highlights the individual and combined contributions of LAAS and CI. The reported metrics (success rate SR) are:

Execution Mode              SR (%)
Baseline (no CI, no LAAS)    30.3
+ LAAS only                  36.1
+ CI only                    39.7
CI + LAAS (full)             47.1

LAAS alone outperforms the static-chunk baseline by 5.8 percentage points, and the full combination of CI and LAAS achieves a 16.8-point improvement over the baseline. Cross-model integration ablations show that adding LAAS + CI to alternative VLAs (e.g., SmolVLA) raises their SR from ~12.7% to ~25.6%, suggesting the streaming approach generalizes robustly (Xie et al., 29 Jan 2026).
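The quoted gains are absolute differences over the baseline row of the table, which a quick check confirms:

```python
# Recompute the absolute gains from the ablation table above.
sr = {"baseline": 30.3, "laas": 36.1, "ci": 39.7, "ci+laas": 47.1}

laas_gain = round(sr["laas"] - sr["baseline"], 1)      # LAAS alone
full_gain = round(sr["ci+laas"] - sr["baseline"], 1)   # full CI + LAAS
print(laas_gain, full_gain)  # -> 5.8 16.8
```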

7. Significance and Implications

LAAS provides a generalizable, model-agnostic, and infrastructure-light solution for tightly coupling prediction and control in dynamic environments. Its formal gating and chunk selection policies directly address underexplored failure modes in chunked and CI-based action pipelines, specifically temporal misalignment and chunk-overlap ambiguity. The empirical gains in dynamic manipulation substantiated on the DOM benchmark indicate that LAAS constitutes a critical advancement for real-time adaptive control in VLA-driven systems.

A plausible implication is that similar gating-based execution regimes could benefit other sequential decision making architectures where model latency is non-negligible relative to environmental dynamics, particularly in robotic, reinforcement learning, and closed-loop synthesis domains. The mechanism’s independence from training pipeline modifications increases its practical applicability as a deployment-side augmentation.
