Hierarchical visuomotor control of humanoids

Published 23 Nov 2018 in cs.AI and cs.RO | arXiv:1811.09656v2

Abstract: We aim to build complex humanoid agents that integrate perception, motor control, and memory. In this work, we partly factor this problem into low-level motor control from proprioception and high-level coordination of the low-level skills informed by vision. We develop an architecture capable of surprisingly flexible, task-directed motor control of a relatively high-DoF humanoid body by combining pre-training of low-level motor controllers with a high-level, task-focused controller that switches among low-level sub-policies. The resulting system is able to control a physically-simulated humanoid body to solve tasks that require coupling visual perception from an unstabilized egocentric RGB camera during locomotion in the environment. For a supplementary video link, see https://youtu.be/7GISvfbykLE .

Citations (95)

Summary

  • The paper introduces a hierarchical RL architecture combining pre-trained low-level motor policies with a visual high-level controller for coordinated humanoid control.
  • The method leverages control fragments derived from motion-capture tracking policies, enabling rapid convergence and improved performance on tasks like navigation and foraging.
  • The approach offers practical insights into scalable, biologically inspired humanoid systems with potential for diverse real-world applications.

Overview of Hierarchical Visuomotor Control of Humanoids

The paper "Hierarchical Visuomotor Control of Humanoids" presents a methodology that leverages hierarchical reinforcement learning (RL) to achieve complex visuomotor tasks for humanoid agents. A significant challenge in RL involves handling the substantial dimensionality of both input and action spaces, especially when dealing with humanoid control which demands high degrees of freedom (DoF) and perception-driven actions. The authors propose an architecture combining pre-trained low-level motor control with a high-level task-directed controller to manage a simulated humanoid.

Architecture and Methodology

The proposed system splits control between a low-level controller responsible for basic motor skills driven by proprioceptive inputs and a high-level controller that coordinates those skills using visual information. The low-level controllers are pre-trained policies derived from motion capture data, each tracking a specific humanoid movement such as walking or turning. The high-level controller, equipped with egocentric vision and memory, selects and sequences the low-level controllers to solve complex tasks.
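
To make the division of labor concrete, here is a minimal Python sketch of the two-level structure. The class names, the `select_fn` interface, and the recurrent memory handling are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of the two-level architecture; all names here
# are my own, not from the paper's code.

class LowLevelController:
    """Pre-trained skill policy: proprioception -> joint-level action."""

    def __init__(self, policy_fn):
        self.policy_fn = policy_fn  # a frozen, pre-trained network

    def act(self, proprioception):
        return self.policy_fn(proprioception)


class HighLevelController:
    """Selects among low-level skills from egocentric vision and memory."""

    def __init__(self, skills, select_fn):
        self.skills = skills        # list[LowLevelController]
        self.select_fn = select_fn  # trained with the task reward
        self.memory = None          # recurrent state, e.g. an LSTM cell

    def act(self, image, proprioception):
        skill_index, self.memory = self.select_fn(image, self.memory)
        return self.skills[skill_index].act(proprioception)
```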

This separation lets the high-level controller focus on task-level decision-making while relying on robust low-level control to execute detailed motor actions. The low-level controllers are trained with an RL-based imitation objective to track time-indexed reference trajectories derived from motion capture, which makes them resilient to minor deviations from the reference motion. The high-level controller implements a decision policy over visual inputs, acting by selecting the appropriate low-level skill at each decision point.
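
A common way to set up such a tracking objective is to reward the agent for staying close to the time-indexed mocap reference. The sketch below is a hedged illustration of that idea; the specific error terms, scales, and weights are assumptions and will differ from the paper's exact reward.

```python
import numpy as np

def tracking_reward(sim_state, ref_state, weights=(0.5, 0.3, 0.2)):
    """Illustrative imitation reward: stay close to the time-indexed
    mocap reference in joint pose, joint velocity, and end-effector
    position. Terms, scales, and weights are assumptions, not the
    paper's exact formulation. States are dicts of numpy arrays."""
    w_pose, w_vel, w_ee = weights
    pose_err = np.sum((sim_state["joints"] - ref_state["joints"]) ** 2)
    vel_err = np.sum((sim_state["joint_vels"] - ref_state["joint_vels"]) ** 2)
    ee_err = np.sum((sim_state["end_effectors"] - ref_state["end_effectors"]) ** 2)
    # Exponentiated negative errors keep each term bounded in (0, 1].
    return (w_pose * np.exp(-2.0 * pose_err)
            + w_vel * np.exp(-0.1 * vel_err)
            + w_ee * np.exp(-10.0 * ee_err))
```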

Experiments and Results

The researchers tested the hierarchical approach on a suite of tasks including obstacle navigation, foraging for targets, and a heterogeneous foraging task that requires memory. Control fragments, short snippets of the pre-trained tracking policies, proved particularly effective: discrete switching among many small fragments gave the high-level controller flexibility and adaptability without hand-designed transitions. Benchmarked against models trained from scratch, the pre-trained low-level controllers yielded faster convergence and better task performance.
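
The interaction loop implied by this setup can be sketched as follows, assuming a gym-style environment whose observations contain an egocentric image and proprioception; the names, the observation layout, and the fragment length are hypothetical.

```python
def run_episode(env, high_level, fragments, fragment_len=10):
    """Cold-switching over control fragments (illustrative): the
    high-level controller picks a fragment at each decision point,
    and the chosen fragment then runs for `fragment_len` simulation
    steps from proprioception alone."""
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        # High-level decision from vision; low-level execution follows.
        fragment = fragments[high_level.choose(obs["image"])]
        for _ in range(fragment_len):
            action = fragment.act(obs["proprioception"])
            obs, reward, done, _ = env.step(action)
            total_reward += reward
            if done:
                break
    return total_reward
```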

Additionally, the paper compares several low-level controller types, including continuous steerable controllers and discrete switching controllers. Manually designed transitions between low-level controllers yield smoother execution but require significant engineering effort; in contrast, cold-switching among control fragments scales with little manual intervention, at the cost of occasionally jerky transitions.
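
The practical difference between these controller families shows up in the high-level action space. The sketch below contrasts the two interfaces; the field names and units are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class SteeringCommand:
    """Continuous command for a steerable low-level controller."""
    target_heading: float  # desired heading, radians, egocentric frame
    target_speed: float    # desired speed, metres per second

@dataclass
class SwitchCommand:
    """Discrete choice over pre-trained control fragments."""
    fragment_index: int    # which fragment to execute next

# The high-level policy emits one or the other, depending on the
# controller family in use.
HighLevelAction = Union[SteeringCommand, SwitchCommand]
```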

Implications and Future Work

This research contributes to a growing interdisciplinary field encompassing neuroscience, animation, and robotics. By aligning with neural principles observed in animal locomotion, the framework potentially offers insights into how biological systems achieve efficient motor control.

Practically, the hierarchical architecture demonstrates promise for applications requiring humanoid robots to perform complex, real-world tasks. The emphasis on an egocentric visual system also aligns well with future real-world deployment, where robots must operate based on visual feedback.

The work underscores the need for balancing task abstraction across hierarchical levels, suggesting that higher-level decisions should exploit the versatility of a well-encapsulated low-level motor skill library. Future investigations could focus on refining the transitions to further minimize artifacts and explore unsupervised approaches to fragment combination, potentially enabling more graceful behavior synthesis without manual curation.

In conclusion, the paper presents a comprehensive study on hierarchical visuomotor control, highlighting the effectiveness of modular, pre-trained components in tackling high-dimensional reinforcement learning challenges in humanoid agents.
