Deep Haptic Model Predictive Control for Robot-Assisted Dressing (1709.09735v3)

Published 27 Sep 2017 in cs.RO, cs.AI, and stat.ML

Abstract: Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person's body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model only use haptic and kinematic observations from the robot's end effector, which are readily attainable. Collecting training data from real world physical human-robot interaction can be time consuming, costly, and put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2s prediction horizon, our controller succeeded at high rates and lowered applied force while navigating the garment around a person's fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance with the sleeve catching on the participants' fists and elbows, demonstrating the value of our model's predictions. These behaviors of mitigating catches emerged from our deep predictive model and the controller objective function, which primarily penalizes high forces.

Citations (79)

Summary

  • The paper presents Deep Haptic Model Predictive Control (MPC), a method for robot-assisted dressing designed to mitigate high forces exerted on the human body.
  • This approach trains a deep recurrent network using physics-based simulations to predict garment-body interaction forces from robot haptic and kinematic data, bypassing complex vision systems.
  • Evaluations with a PR2 robot demonstrated successful low-force dressing, with emergent behaviors such as guiding the garment around the fist and elbow, highlighting the value of accurate force prediction and simulation-based training for safe assistance.

Deep Haptic Model Predictive Control for Robot-Assisted Dressing

This paper presents a method termed Deep Haptic Model Predictive Control (MPC) for robot-assisted dressing, a task that can benefit individuals with disabilities. The central challenge is mitigating the high forces a robot can inadvertently apply while manipulating non-rigid garments, which is crucial for safe, autonomous dressing assistance. The paper employs a deep recurrent model that predicts the forces a garment exerts on a person's body from haptic and kinematic observations alone, bypassing the need for complex vision systems.

Methodology

The researchers propose an approach in which a deep recurrent neural network, trained on data from physics-based simulation, predicts contact forces from haptic and kinematic input at the robot's end effector. This choice circumvents collecting real-world training data, which can be risky, time consuming, and cost-prohibitive. The solution consists of an estimator network and a predictor network: the estimator outputs the location and magnitude of forces applied to the body, while the predictor anticipates future haptic measurements conditioned on proposed robot actions.
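As a concrete illustration, below is a minimal sketch of such an estimator/predictor pair in PyTorch. The LSTM-based architecture, hidden sizes, observation dimensionality, and number of force locations are assumptions chosen for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ForceEstimator(nn.Module):
    """Maps a history of end-effector haptic/kinematic readings to contact
    force magnitudes at discrete locations on the limb (dimensions are
    illustrative assumptions, not the paper's exact values)."""
    def __init__(self, obs_dim=9, hidden_dim=50, n_locations=32):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_locations)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim) force/torque and velocity readings
        h, _ = self.lstm(obs_seq)
        return self.head(h[:, -1])            # force magnitude per location

class ForcePredictor(nn.Module):
    """Given the same history plus a candidate action sequence, predicts
    the haptic measurement expected after executing the actions."""
    def __init__(self, obs_dim=9, act_dim=3, hidden_dim=50):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim + act_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, obs_dim)

    def forward(self, obs_seq, act_seq):
        # Concatenate observations with proposed actions at each timestep.
        x = torch.cat([obs_seq, act_seq], dim=-1)
        h, _ = self.lstm(x)
        return self.head(h[:, -1])            # predicted next haptic reading
```

Both networks can be fit with a standard regression loss against the simulator's ground-truth forces and future measurements, since the simulation provides those labels for free.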

Model training covers diverse dressing scenarios programmed into a simulation environment, yielding comprehensive training data from randomized robot actions. Key variables, such as fabric properties, were calibrated using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to align simulated measurements with data recorded from the real robot, keeping the learned predictions relevant to physical hardware.
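A hedged sketch of such a calibration loop, using the pycma package, follows. The parameter set (stiffness, friction, damping), the stand-ins for the simulator and the recorded force trace, and the mean-squared-error cost are illustrative assumptions; only the CMA-ES ask/tell interface is the library's actual API.

```python
import cma
import numpy as np

rng = np.random.default_rng(0)
real_trace = rng.normal(size=200)  # stand-in for a force trace recorded on the robot

def run_dressing_sim(params):
    # Stand-in for the physics-based dressing simulation: produces a force
    # trace whose offset depends on the candidate parameters.
    return real_trace + (np.asarray(params) - 0.5).sum() * 0.1

def mismatch(params):
    # Mean squared error between simulated and recorded force traces.
    return float(np.mean((run_dressing_sim(params) - real_trace) ** 2))

x0 = [0.5, 0.5, 0.5]                 # e.g. stiffness, friction, damping (assumed)
es = cma.CMAEvolutionStrategy(x0, 0.2)
while not es.stop():
    candidates = es.ask()            # sample candidate parameter vectors
    es.tell(candidates, [mismatch(x) for x in candidates])
best_params = es.result.xbest        # calibrated simulation parameters
print("calibrated parameters:", best_params)
```

Because CMA-ES is derivative-free, it works even though the simulator is a black box from the optimizer's point of view.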

Application with the PR2 Robot

Evaluation involved a PR2 robot executing garment-dressing tasks on 10 human participants. The researchers explored two scenarios: dressing the full arm, and predicting when the garment would catch on a person's fist. Three prediction horizons (0.01s, 0.05s, and 0.2s) were tested; longer horizons led to significantly better task performance, since the controller could foresee and mitigate impending high-force contacts.
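One way to realize such a controller is a sampling-based MPC step like the sketch below, which reuses the ForcePredictor sketched earlier. The action parameterization (small end-effector displacements), the candidate count, and the assumption that the first three predicted components are force readings are all hypothetical simplifications; what the paper specifies is that the controller objective primarily penalizes high predicted forces.

```python
import torch

def mpc_step(predictor, obs_seq, act_dim=3, n_candidates=64):
    """Choose the candidate action whose predicted haptic outcome has the
    lowest force cost. obs_seq: (1, T, obs_dim) observation history."""
    T = obs_seq.shape[1]
    # Sample small candidate end-effector displacements (assumed scheme).
    candidates = 0.01 * torch.randn(n_candidates, act_dim)
    best_cost, best_action = float("inf"), None
    with torch.no_grad():
        for a in candidates:
            act_seq = a.expand(1, T, act_dim)   # hold the action over the history
            pred = predictor(obs_seq, act_seq)  # predicted next haptic reading
            cost = pred[0, :3].norm().item()    # penalize predicted force magnitude
            if cost < best_cost:
                best_cost, best_action = cost, a
    return best_action  # executed for one control step, then the robot replans
```

The robot executes only the chosen action for a single control step and then replans, the standard receding-horizon pattern; a longer horizon gives the predictor more room to flag an impending catch before it occurs.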

Notably, with a 0.2s horizon, the PR2 demonstrated emergent behaviors, such as skillfully navigating the garment around a person's elbow, an intricate maneuver that underscores the utility of robust predictive capabilities. The results indicate that accurate force prediction reduces the risk of the garment catching on the body and improves dressing performance.

Implications and Future Directions

The research highlights several pivotal implications. Theoretically, the paper contributes to the growing corpus of knowledge in model-based robotic control for assistive tasks, emphasizing a low-force paradigm as a critical metric for human-robot interactions. Practically, the method enhances assistive robotic systems, fostering greater autonomy and safety in delicate tasks like dressing.

Training in simulation holds promise for future AI development, potentially reducing reliance on laborious real-world data collection. The approach could plausibly extend to other fabric-manipulation tasks, or to other assistive functions where haptic feedback is paramount.

Additionally, while the current study constrained participants to static poses for consistency, future work could integrate dynamic state estimation, potentially through vision-based systems, to accommodate varied human postures in real time. Faster computation could also relax the limits on prediction horizon and action-computation rate, broadening the scope and efficiency of such robotic systems.

In conclusion, this paper successfully demonstrates that leveraging deep haptic MPC can significantly refine the capabilities of assistive robots in dressing applications, showcasing a promising trajectory for the development of intelligent, safe, and efficient human-centric robotic solutions.
