
Mitigate catastrophic forgetting in generalist robotic manipulation policies

Develop training strategies and model designs that prevent catastrophic forgetting when generalist robotic manipulation policies are trained across many tasks, so that policies retain previously learned skills while acquiring new ones.


Background

The paper studies Large Behavior Models (LBMs)—multitask visuomotor policies trained on large, heterogeneous robot datasets—and evaluates their benefits over single-task models. While demonstrating gains from multitask pretraining, the authors note that catastrophic forgetting persists as an open research problem for generalist policies.

Addressing forgetting is critical for practical deployment of generalist policies that must continually incorporate new tasks and data without erasing prior capabilities.
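As context for the kind of training strategy the question asks for, the following is a minimal, hypothetical sketch of one common mitigation, experience replay (data rehearsal), in which every fine-tuning batch for a new task is mixed with samples from previously learned tasks. The names `TinyPolicy`, `finetune_with_replay`, and `replay_ratio` are illustrative and not from the paper, and a real LBM-scale visuomotor policy would use far richer observations, actions, and objectives than this toy behavior-cloning loss.

```python
# Illustrative sketch of rehearsal-based continual fine-tuning (not the paper's method).
# A generic policy is fine-tuned on a new task while a fraction of each batch is
# replayed from earlier tasks, so older skills are not simply overwritten.
import random
import torch
import torch.nn as nn


class TinyPolicy(nn.Module):
    """Stand-in for a visuomotor policy: maps a flat observation to an action."""

    def __init__(self, obs_dim=32, act_dim=7):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))

    def forward(self, obs):
        return self.net(obs)


def make_batch(dataset, batch_size):
    """Sample (observation, action) pairs and stack them into tensors."""
    obs, act = zip(*random.sample(dataset, batch_size))
    return torch.stack(obs), torch.stack(act)


def finetune_with_replay(policy, new_data, replay_buffer, steps=1000,
                         batch_size=64, replay_ratio=0.5, lr=1e-4):
    """Each gradient step mixes new-task samples with replayed old-task samples."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    n_replay = int(batch_size * replay_ratio)
    n_new = batch_size - n_replay
    for _ in range(steps):
        new_obs, new_act = make_batch(new_data, n_new)
        old_obs, old_act = make_batch(replay_buffer, n_replay)
        obs = torch.cat([new_obs, old_obs])
        act = torch.cat([new_act, old_act])
        loss = nn.functional.mse_loss(policy(obs), act)  # toy behavior-cloning loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy


if __name__ == "__main__":
    # Synthetic stand-ins for demonstration data: (observation, action) pairs.
    old_task = [(torch.randn(32), torch.randn(7)) for _ in range(512)]
    new_task = [(torch.randn(32), torch.randn(7)) for _ in range(512)]
    policy = TinyPolicy()
    finetune_with_replay(policy, new_task, replay_buffer=old_task, steps=200)
```

Keeping even a modest replay fraction of prior data in every batch is among the simplest ways to limit forgetting; deciding which past episodes to retain and rehearse at the scale of heterogeneous LBM pretraining data is itself part of the open question.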

References

Despite progress in training generalist policies, challenges such as catastrophic forgetting, data heterogeneity, scarcity of high-quality data, multimodal fusion, handling dexterity, and maintaining real-time inference speed remain open research problems.

A Careful Examination of Large Behavior Models for Multitask Dexterous Manipulation (2507.05331 - Team et al., 7 Jul 2025) in Section 2.1, Related Work—Robot Learning at Scale