
DemoSpeedup: Accelerating Visuomotor Policies via Entropy-Guided Demonstration Acceleration (2506.05064v2)

Published 5 Jun 2025 in cs.RO

Abstract: Imitation learning has shown great promise in robotic manipulation, but the policy's execution is often unsatisfactorily slow due to commonly tardy demonstrations collected by human operators. In this work, we present DemoSpeedup, a self-supervised method to accelerate visuomotor policy execution via entropy-guided demonstration acceleration. DemoSpeedup starts from training an arbitrary generative policy (e.g., ACT or Diffusion Policy) on normal-speed demonstrations, which serves as a per-frame action entropy estimator. The key insight is that frames with lower action entropy estimates call for more consistent policy behaviors, which often indicate the demands for higher-precision operations. In contrast, frames with higher entropy estimates correspond to more casual sections, and therefore can be more safely accelerated. Thus, we segment the original demonstrations according to the estimated entropy, and accelerate them by down-sampling at rates that increase with the entropy values. Trained with the speedup demonstrations, the resulting policies execute up to 3 times faster while maintaining the task completion performance. Interestingly, these policies could even achieve higher success rates than those trained with normal-speed demonstrations, due to the benefits of reduced decision-making horizons. Project Page: https://demospeedup.github.io/

Summary

  • The paper presents a method that uses entropy-guided segmentation to accelerate human-collected demonstrations in visuomotor policy training.
  • It employs a clustering approach to separate high-entropy from low-entropy segments, achieving speed improvements of up to 278% while preserving success rates.
  • Experimental results in simulations and real-world setups highlight the method's potential to enhance efficiency in time-sensitive robotic applications.

DemoSpeedup: Accelerating Visuomotor Policies with Entropy-Guided Demonstrations

The research paper "DemoSpeedup: Accelerating Visuomotor Policies via Entropy-Guided Demonstration Acceleration" addresses an inherent inefficiency of imitation learning in robotic manipulation: policies inherit the slow pace of the human demonstrations they are trained on. The paper proposes DemoSpeedup, a self-supervised method that accelerates visuomotor policy execution through entropy-guided demonstration acceleration.

Overview of DemoSpeedup

DemoSpeedup is predicated on the observation that visuomotor policies trained on human demonstrations are often slow because human data collection itself is slow. Even skilled operators face equipment latency, morphological differences between humans and robots, and limited sensory feedback, among other obstacles. The resulting slow demonstrations cap the execution speed of policies learned by imitation.

The core innovation is an entropy-guided measure of precision requirements. An intermediary generative policy, trained on the original non-accelerated demonstrations, serves as a per-frame action-entropy estimator. DemoSpeedup then segments demonstration trajectories by estimated entropy and accelerates the segments classified as low-precision, raising execution speed without compromising task completion rates. The segmentation is realized through a clustering-based preprocessing step that separates high-entropy segments from low-entropy ones, so that each can be down-sampled at a different rate.
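The pipeline above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `sample_action` is a hypothetical stand-in for sampling from a trained generative policy (e.g. ACT or Diffusion Policy), the Gaussian log-variance entropy proxy is an assumption, and a simple median threshold stands in for the paper's clustering step.

```python
import numpy as np

def estimate_frame_entropy(sample_action, observations, n_samples=32):
    """Per-frame action-entropy proxy.

    `sample_action(obs)` is a hypothetical callable that draws one
    action vector from a trained generative policy; frames where the
    sampled actions agree (low entropy) are treated as high-precision.
    """
    ents = []
    for obs in observations:
        acts = np.stack([sample_action(obs) for _ in range(n_samples)])
        var = acts.var(axis=0) + 1e-8  # per-dimension spread of sampled actions
        # Gaussian differential entropy, averaged over action dimensions.
        ents.append(0.5 * np.log(2 * np.pi * np.e * var).mean())
    return np.array(ents)

def accelerate_demo(frames, entropies, high_rate=3, low_rate=1):
    """Split frames into high-/low-entropy groups (median threshold in
    place of clustering) and down-sample: casual, high-entropy sections
    are thinned `high_rate`-fold, while precise, low-entropy sections
    are kept at (or near) full temporal resolution."""
    is_high = entropies > np.median(entropies)
    kept, i = [], 0
    while i < len(frames):
        kept.append(frames[i])
        i += high_rate if is_high[i] else low_rate
    return kept
```

A policy retrained on the `kept` frames sees the same task content over a shorter horizon, which is the source of both the speedup and the reduced compounding-error horizon discussed below.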

Experimental Validation

The authors validate the efficacy of DemoSpeedup through extensive simulation and real-world experiments using popular imitation policies such as Action Chunking with Transformers (ACT) and Diffusion Policy (DP). In simulated environments, the accelerated policies outperformed traditional test-time acceleration methods, completing tasks up to three times faster than policies trained on the original datasets. Notably, the accelerated policies not only matched the success rates of the slower policies but in some instances surpassed them, thanks to shorter decision-making horizons.

In real-world setups, DemoSpeedup substantially decreased completion times while maintaining performance across various tasks. Results showed up to a 278% speedup with ACT-based policies and a 214% speedup with DP-based policies on certain tasks, highlighting its advantage in contexts where execution speed and efficiency are paramount.

Implications and Future Directions

From a practical standpoint, DemoSpeedup represents a significant leap forward in enhancing robot efficiency in time-sensitive applications such as manufacturing, caregiving, and emergency response operations. The self-supervised manner of estimating precision requirements via action entropy is particularly valuable as it dispenses with the need for meticulous human annotations or predefined task-specific constraints, thereby broadening its applicability.

The theoretical implications suggest promising avenues for further exploration in self-supervised learning paradigms, particularly in understanding how entropy and randomness can serve as indicators for optimizing robotic behavior cloning. DemoSpeedup’s insights into reducing decision horizons and improving execution speeds without a loss in efficacy could influence the design of future generative models and present a pathway for reducing compounding errors in imitation learning frameworks.

Further research could focus on harmonizing dynamics between original and accelerated demonstrations to prevent potential performance drops. Additionally, tackling inherent inference delays seen in policies such as DP through distillation methods or alternative models could enhance acceleration outcomes considerably.

In conclusion, DemoSpeedup illustrates an efficient methodology to overcome the limitations posed by human-collected demonstrations in robotic manipulation tasks, while maintaining satisfactory task completion metrics. Through a combination of entropy estimation and strategic trajectory segmentation, this approach advances the capabilities of imitation learning in practical robotic applications.
