
Deep Dynamics Models for Learning Dexterous Manipulation

Published 25 Sep 2019 in cs.RO and cs.LG (arXiv:1909.11652v1)

Abstract: Dexterous multi-fingered hands can provide robots with the ability to flexibly perform a wide range of manipulation skills. However, many of the more complex behaviors are also notoriously difficult to control: Performing in-hand object manipulation, executing finger gaits to move objects, and exhibiting precise fine motor skills such as writing, all require finely balancing contact forces, breaking and reestablishing contacts repeatedly, and maintaining control of unactuated objects. Learning-based techniques provide the appealing possibility of acquiring these skills directly from data, but current learning approaches either require large amounts of data and produce task-specific policies, or they have not yet been shown to scale up to more complex and realistic tasks requiring fine motor skills. In this work, we demonstrate that our method of online planning with deep dynamics models (PDDM) addresses both of these limitations; we show that improvements in learned dynamics models, together with improvements in online model-predictive control, can indeed enable efficient and effective learning of flexible contact-rich dexterous manipulation skills -- and that too, on a 24-DoF anthropomorphic hand in the real world, using just 4 hours of purely real-world data to learn to simultaneously coordinate multiple free-floating objects. Videos can be found at https://sites.google.com/view/pddm/

Citations (385)

Summary

  • The paper introduces the online planning with deep dynamics models (PDDM) approach that leverages deep neural network ensembles and MPC to efficiently learn dexterous manipulation.
  • It demonstrates how a 24-DoF anthropomorphic hand learns complex tasks like rotating Baoding balls with only four hours of real-world data.
  • The method outperforms traditional model-free techniques, providing faster learning, improved data efficiency, and robust performance in high-dimensional, contact-rich environments.

Overview of "Deep Dynamics Models for Learning Dexterous Manipulation"

The paper by Nagabandi, Konolige, Levine, and Kumar explores model-based reinforcement learning (MBRL) for the long-standing challenge of dexterous manipulation with robotic hands. Unlike simpler gripper mechanisms, multi-fingered hands with their additional degrees of freedom afford far greater dexterity and are pivotal for complex manipulation tasks such as in-hand object manipulation and intricate finger gaiting. The authors present an algorithm that combines learned predictive models with online planning to acquire manipulation skills efficiently, without relying on large amounts of task-specific data or handcrafted system dynamics.

Key Contributions

The paper introduces a novel method termed "online planning with deep dynamics models" (PDDM) which employs deep neural network ensembles to capture system dynamics and uses model-predictive control (MPC) for selecting optimal actions. This combination capitalizes on high-capacity models' ability to learn from limited data while incorporating uncertainty estimation to mitigate overfitting, particularly crucial in non-linear, contact-rich environments.
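The ensemble idea can be illustrated with a toy sketch: train several dynamics models on bootstrapped resamples of the data, predict state *deltas*, and use disagreement across members as a crude uncertainty signal. The sketch below stands in for the paper's deep neural networks with simple ridge-regression members; all class names and hyperparameters here are illustrative, not the paper's.

```python
import numpy as np

class EnsembleDynamics:
    """Toy bootstrap ensemble of ridge-regression dynamics models.

    A minimal stand-in for PDDM's neural network ensemble: each member
    is fit on a bootstrap resample of (state, action, next_state) data
    and predicts the state delta; spread across members approximates
    model uncertainty. Hyperparameters are illustrative.
    """

    def __init__(self, n_models=3, reg=1e-3, seed=0):
        self.n_models = n_models
        self.reg = reg
        self.rng = np.random.default_rng(seed)
        self.weights = []

    def fit(self, states, actions, next_states):
        X = np.hstack([states, actions])          # inputs: (s_t, a_t)
        Y = next_states - states                  # targets: state deltas
        n, d = X.shape
        self.weights = []
        for _ in range(self.n_models):
            idx = self.rng.integers(0, n, size=n)  # bootstrap resample
            Xb, Yb = X[idx], Y[idx]
            W = np.linalg.solve(Xb.T @ Xb + self.reg * np.eye(d), Xb.T @ Yb)
            self.weights.append(W)

    def predict(self, state, action):
        """Return (mean next state, per-dimension ensemble disagreement)."""
        x = np.concatenate([state, action])
        deltas = np.stack([x @ W for W in self.weights])
        return state + deltas.mean(axis=0), deltas.std(axis=0)
```

Predicting deltas rather than raw next states keeps targets small and centered, a common trick in learned-dynamics work; the ensemble's disagreement term is what lets a planner discount action sequences that stray into poorly modeled regions.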

Significantly, the approach enables a 24-degree-of-freedom (DoF) anthropomorphic hand to learn dexterous manipulation tasks from only four hours of real-world data. The flagship task, rotating a pair of Baoding balls in the palm, is a high-dimensional challenge requiring coordination across multiple contact points amid complex, discontinuous contact dynamics. On this and related tasks, the method shows marked improvements over model-free reinforcement learning (MFRL) approaches such as SAC and NPG, in both data efficiency and the flexibility to repurpose the learned model across tasks.

Empirical Validation and Performance

The authors benchmark their method against prominent existing algorithms, including model-based methods like PETS and model-free methods like SAC, on tasks of varying complexity. In simpler tasks such as valve turning using a 9-DoF hand, most methods perform adequately, yet PDDM still achieves the fastest learning. However, in complex tasks like handwriting, where precision and arbitrary pattern following are crucial, PDDM significantly outperforms other algorithms, highlighting its capability to leverage learned models for greater flexibility and task adaptability.

An ablation analysis of the method's design choices reveals that three components are vital to PDDM's success: model ensembling, deep models with sufficient capacity, and a refined MPC planner that samples action sequences with filtered (time-correlated) noise and updates them via reward-weighted refinement.
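The planner described above, sampling with filtered noise and refining via reward weighting, can be sketched in the spirit of PDDM's controller as follows. This is a simplified illustration with a toy dynamics function, not the paper's implementation; `gamma` (reward-weighting temperature), `beta` (filtering coefficient), and the other parameters are illustrative values.

```python
import numpy as np

def mppi_plan(dynamics, reward_fn, state, horizon=15, n_samples=200,
              gamma=10.0, beta=0.6, sigma=0.3, n_iters=3, act_dim=1, seed=0):
    """Reward-weighted MPC sketch in the spirit of PDDM's planner.

    Samples action sequences around a nominal plan using low-pass-filtered
    (time-correlated) noise, rolls them out through `dynamics`, and updates
    the plan as a softmax-weighted average of the samples. Returns only the
    first action; MPC replans from the next observed state.
    """
    rng = np.random.default_rng(seed)
    mean = np.zeros((horizon, act_dim))           # nominal action sequence
    for _ in range(n_iters):
        noise = rng.normal(scale=sigma, size=(n_samples, horizon, act_dim))
        for t in range(1, horizon):               # correlate noise over time
            noise[:, t] = beta * noise[:, t] + (1 - beta) * noise[:, t - 1]
        actions = mean[None] + noise
        returns = np.zeros(n_samples)
        for k in range(n_samples):                # roll out each candidate
            s = state.copy()
            for t in range(horizon):
                s = dynamics(s, actions[k, t])
                returns[k] += reward_fn(s)
        w = np.exp(gamma * (returns - returns.max()))  # reward weighting
        w /= w.sum()
        mean = (w[:, None, None] * actions).sum(axis=0)
    return mean[0]                                # execute first action only
```

In a real deployment the `dynamics` argument would be the learned ensemble's mean prediction, so that candidate sequences are scored under the model rather than the true system; the time-correlated noise is what produces the smooth, coordinated finger motions that uncorrelated sampling tends to miss.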

Practical and Theoretical Implications

The study's results have broad implications for both the development of autonomous robotic systems and the theoretical underpinnings of reinforcement learning. Practically, it suggests a feasible pathway toward enabling complex, real-world dexterous manipulation using data-driven approaches, circumventing the traditionally prohibitive requirements for data and system characterization that hamper model-free methods. Theoretically, the work demonstrates how uncertainty-aware, model-based approaches can tackle complex high-dimensional tasks, previously believed to be within the exclusive purview of model-free methods or systems with known dynamics.

The research opens several future directions, including the exploration of hierarchical planning methods to extend scalability to even longer horizon tasks and the integration of richer sensor modalities such as vision or tactile feedback for enhanced interaction modeling.

In conclusion, the paper makes a compelling case for the utility of deep dynamics models in advancing dexterous robotic manipulation, offering vital insights that can influence future research in autonomous robotics and machine learning broadly.
