CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics (2406.14558v3)

Published 20 Jun 2024 in cs.RO and cs.AI

Abstract: Enabling humanoid robots to clean rooms has long been a pursued dream within humanoid research communities. However, many tasks require multi-humanoid collaboration, such as carrying large and heavy furniture together. Given the scarcity of motion capture data on multi-humanoid collaboration and the efficiency challenges associated with multi-agent learning, these tasks cannot be straightforwardly addressed using training paradigms designed for single-agent scenarios. In this paper, we introduce Cooperative Human-Object Interaction (CooHOI), a framework designed to tackle the challenge of multi-humanoid object transportation problem through a two-phase learning paradigm: individual skill learning and subsequent policy transfer. First, a single humanoid character learns to interact with objects through imitation learning from human motion priors. Then, the humanoid learns to collaborate with others by considering the shared dynamics of the manipulated object using centralized training and decentralized execution (CTDE) multi-agent RL algorithms. When one agent interacts with the object, resulting in specific object dynamics changes, the other agents learn to respond appropriately, thereby achieving implicit communication and coordination between teammates. Unlike previous approaches that relied on tracking-based methods for multi-humanoid HOI, CooHOI is inherently efficient, does not depend on motion capture data of multi-humanoid interactions, and can be seamlessly extended to include more participants and a wide range of object types.


Summary

  • The paper introduces CooHOI, a two-phase learning framework combining imitation learning and multi-agent reinforcement learning for cooperative human-object interaction tasks without extensive multi-agent motion capture data.
  • Experiments show CooHOI enables lifelike behaviors and successfully handles collaborative transport of objects up to 40 kg, outperforming training from scratch while requiring fewer samples.
  • This framework advances humanoid robot development for collaborative tasks like warehousing and home assistance, offering a scalable approach for future adaptive multi-agent coordination research.

Overview of CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics

The paper "CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics" introduces an innovative framework designed to address the challenges inherent in multi-humanoid collaborative tasks, particularly in the context of transporting large objects. The focal challenge arises due to a scarcity of motion capture data pertinent to multi-humanoid collaboration, coupled with the inherent complexities of multi-agent learning. The authors tackle this problem through a novel two-phase learning paradigm consisting of individual skill acquisition followed by a policy transfer phase.

Key Contribution and Methodology

The primary contribution of CooHOI is its two-phase approach to cooperative human-object interaction (HOI) tasks. In the first phase, a single humanoid character learns object manipulation through imitation learning from human motion priors. In the second phase, the pretrained skill is transferred to a team of agents and refined with centralized training and decentralized execution (CTDE) multi-agent reinforcement learning. This staged process lets each agent first attain proficiency in the single-agent task before extending that skill into a multi-agent collaborative strategy, as sketched below.
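The staged process can be pictured as a policy-transfer step: a single-agent actor pretrained on imitation rewards is cloned into each agent of the multi-agent team, and a centralized critic, used only during training, is added on top. The following PyTorch snippet is a minimal sketch of that structure, not the authors' implementation; the network sizes, the observation layout, and names such as `obs_dim` and `act_dim` are illustrative assumptions.

```python
import torch
import torch.nn as nn

def mlp(in_dim: int, out_dim: int, hidden: int = 256) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

obs_dim, act_dim, n_agents = 120, 28, 2  # illustrative sizes, not from the paper

# Phase 1: a single humanoid actor trained with imitation rewards
# derived from human motion priors (training loop omitted here).
single_agent_actor = mlp(obs_dim, act_dim)

# Phase 2: policy transfer into a CTDE setup. Each agent gets its own
# decentralized actor, initialized from the pretrained single-agent skill,
# so cooperation is learned as a fine-tuning step rather than from scratch.
actors = [mlp(obs_dim, act_dim) for _ in range(n_agents)]
for actor in actors:
    actor.load_state_dict(single_agent_actor.state_dict())

# The centralized critic sees the concatenated observations of all agents,
# but only during training; at execution time each actor acts on its own
# local observation.
central_critic = mlp(obs_dim * n_agents, 1)

# Decentralized execution: each actor maps its local observation to an action.
local_obs = [torch.randn(obs_dim) for _ in range(n_agents)]
actions = [actor(o) for actor, o in zip(actors, local_obs)]

# Centralized training: the critic scores the joint observation for the MARL update.
value = central_critic(torch.cat(local_obs))
```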

Unlike previous methods that relied on data-intensive tracking for multi-humanoid HOI, CooHOI sidesteps these limitations through a feedback mechanism centered on the dynamics of the manipulated object: each agent observes the object's state, so the effects of a teammate's actions are perceived implicitly rather than communicated explicitly. This allows the approach to scale to more participants and a wider range of object types without multi-humanoid motion capture datasets.
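Concretely, the feedback channel is just the manipulated object's state entering each agent's local observation: when one agent's contact forces change the object's motion, the teammate sees that change on the next timestep and can respond. The sketch below illustrates such an observation layout; the specific fields, their ordering, and the dimensions are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def build_observation(proprioception: np.ndarray,
                      obj_pos: np.ndarray,
                      obj_vel: np.ndarray,
                      obj_orientation: np.ndarray,
                      target_pos: np.ndarray) -> np.ndarray:
    """Assemble one agent's local observation.

    The object's pose and velocity are the shared dynamics: any change caused
    by a teammate's contact forces shows up here on the next timestep, which
    serves as the only 'communication' channel between agents.
    """
    return np.concatenate([
        proprioception,      # joint positions/velocities of this humanoid
        obj_pos, obj_vel,    # manipulated object translation dynamics
        obj_orientation,     # object orientation (e.g., a quaternion)
        target_pos,          # where the object should be carried
    ])

# Example with illustrative dimensions (not taken from the paper):
obs = build_observation(
    proprioception=np.zeros(78),
    obj_pos=np.zeros(3), obj_vel=np.zeros(3),
    obj_orientation=np.array([0.0, 0.0, 0.0, 1.0]),
    target_pos=np.zeros(3),
)
print(obs.shape)  # (91,)
```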

Experimental Validation and Results

The authors validate the framework on a range of object transportation tasks. The results show that CooHOI produces lifelike behaviors while executing cooperative transport, handling objects weighing up to 40 kg in the two-agent setting and outperforming policies trained from scratch. The experiments also report high success rates and precision in both single-agent and multi-agent settings, highlighting reduced sample complexity and greater versatility.

Implications and Future Directions

Practically, the CooHOI framework advances the development of humanoid robots, potentially transforming automation in sectors requiring collaborative manipulation of large and cumbersome objects, such as warehousing and home assistance. Theoretically, the success of CooHOI in integrating implicit communication via object dynamics paves the way for future research in adaptive multi-agent coordination schemes.

In future developments, integration of dexterous hand capabilities and enhanced navigational skills may further extend the framework’s applicability. Additionally, addressing sensor noise—an inherent challenge in real-world robotic applications—could enhance robustness and reliability. The framework's scalability and adaptability suggest promising avenues for exploration within cooperative AI, expanding its impact beyond traditional robotic domains.

In summary, the paper presents a rigorous and practical approach to advancing multi-humanoid collaborative tasks through an efficient learning framework, thus contributing significantly to the field of robotics and AI in cooperative settings.
