
Controller for Humanoid Imitation & Live Demo

Updated 5 August 2025
  • The paper introduces CHILD as a compact, wearable teleoperation system that maps human joint movements directly to a humanoid robot for high-fidelity imitation.
  • It features dual operational modes: full-body direct joint mapping for precise limb control and loco-manipulation for integrated navigation and manipulation tasks.
  • Adaptive force feedback at each joint provides haptic cues that enhance operator safety and awareness, while the system maintains responsive, low-latency real-time motion transmission.

The Controller for Humanoid Imitation and Live Demonstration (CHILD) is a whole-body humanoid teleoperation system designed to enable joint-level control over humanoid robots through a compact, reconfigurable hardware and software architecture. Developed to address limitations of prior teleoperation approaches, CHILD allows an operator to control all four limbs—arms and legs—of a target humanoid, supporting both direct one-to-one joint mapping for high-fidelity imitation and shared loco-manipulation tasks. The system is physically realized within a standard wearable baby carrier, integrates adaptive force feedback to enhance operator safety and experience, and provides open-source hardware and software resources to facilitate accessibility and reproducibility within the research community (Myers et al., 31 Jul 2025).

1. System Architecture and Physical Design

The CHILD system is structured as a wearable, modular teleoperation platform assembled within a conventional baby carrier, rendering it both compact and suitable for mobile or stationary operation. All power, electronics, and physical teleoperation mechanisms are internally housed, so a single operator can don the system and remain ambulatory while teleoperating a humanoid robot, or it can be mounted to a monitor stand for fixed operation.

The hardware features a set of seven exchangeable mounts:

  • Two mounts serve the legs (one per leg),
  • Four mounts serve the arms (two oriented parallel to the ground and two offset at 45 degrees to accommodate different robot shoulder inclinations), and
  • One mount is dedicated to neck or torso control.

Leader limbs attached to these mounts mirror the kinematic configuration of the follower robot, with scaling factors (e.g., α = 0.65 for custom robots, α = 0.9 for Unitree G1) applied as needed. Joints are actuated by DYNAMIXEL XL330-M288-T servos with high-resolution encoders for accurate joint angle measurement and active force feedback. An onboard 9-axis BNO055 IMU in the torso senses the operator’s body orientation and relays corresponding signals to the robot's torso.
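For reference, a minimal sketch of how such a scale factor might be applied is shown below; it assumes α uniformly scales the follower's link lengths to obtain the leader's link lengths, and the segment names and lengths are illustrative placeholders rather than dimensions from the paper.

```python
# Minimal sketch: derive leader link lengths from a follower kinematic model
# using a uniform scale factor alpha. Assumes alpha scales link lengths;
# the segment names and lengths below are illustrative placeholders.

FOLLOWER_LINKS_M = {"upper_arm": 0.30, "forearm": 0.28, "thigh": 0.35, "shank": 0.33}

def scale_leader_links(follower_links: dict, alpha: float) -> dict:
    """Return a scaled copy of the follower's link lengths for the leader design."""
    return {name: round(alpha * length, 4) for name, length in follower_links.items()}

print(scale_leader_links(FOLLOWER_LINKS_M, alpha=0.9))   # e.g., Unitree G1 scaling
print(scale_leader_links(FOLLOWER_LINKS_M, alpha=0.65))  # e.g., custom-robot scaling
```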

Each mount employs a print-in-place retainer and pogo pin connectors for power and data, allowing for rapid reconfiguration when transitioning between different robot types or for maintenance.

2. Functional Control Modes

CHILD enables two principal operational modes tailored to the requirements of humanoid teleoperation:

  • Full-body Direct Joint Mapping: A Direct Joint Controller reads all leader joint states and directly issues corresponding target joint commands for the follower robot, affording the operator complete one-to-one kinematic control over both upper and lower limbs. This supports live, high-degree-of-freedom gesture and posture teleoperation.
  • Loco-manipulation Mode: For tasks requiring only partial joint-level control, the operator can deactivate a subset of limbs (e.g., via a sustained gripper command). When an arm is deactivated, the corresponding leg’s joint movements are interpreted as a “joystick,” issuing velocity commands (forward/backward, lateral, and yaw) to the robot’s locomotion controller. This hybrid mode enables switching between fine joint control and high-level navigation within a single interface (a minimal sketch of this mapping follows the list).
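The sketch below illustrates the leg-as-joystick idea: three leg-joint deflections from a neutral pose are converted to forward, lateral, and yaw velocity commands with a small deadband. The joint ordering, gains, and deadband are illustrative assumptions, not parameters from the released software.

```python
import numpy as np

# Minimal sketch: interpret leader leg-joint deflections as a velocity
# "joystick" when the corresponding arm is deactivated. Joint ordering,
# gains, and the deadband are illustrative assumptions.

NEUTRAL  = np.array([0.0, 0.0, 0.0])   # hypothetical neutral leg pose (rad)
GAINS    = np.array([0.8, 0.5, 1.0])   # (m/s, m/s, rad/s) per rad of deflection
DEADBAND = 0.05                        # rad; ignore small deflections around neutral

def leg_to_velocity(leg_q: np.ndarray) -> tuple:
    """Map [hip_pitch, hip_roll, hip_yaw] deflections to (vx, vy, wz)."""
    dq = leg_q - NEUTRAL
    dq[np.abs(dq) < DEADBAND] = 0.0    # suppress drift near the neutral pose
    vx, vy, wz = GAINS * dq
    return float(vx), float(vy), float(wz)

# Example: a slight forward hip pitch commands forward walking
print(leg_to_velocity(np.array([0.2, 0.0, 0.0])))   # -> (0.16, 0.0, 0.0)
```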

Internally, the Joint State Subscriber, Locomotion Controller, and Direct Joint Controller modules run asynchronously, sharing state information. Average end-to-end latency is approximately 14 ms, supporting responsive real-time teleoperation.
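The asynchronous layout can be pictured with the threading sketch below, in which a Joint State Subscriber refreshes shared leader joint states while a Direct Joint Controller consumes them at a fixed rate; the loop rates, joint count, and I/O stubs are assumptions for illustration, not the released implementation.

```python
import threading, time

# Sketch of the asynchronous module layout described above: a Joint State
# Subscriber refreshes shared leader joint states while the Direct Joint
# Controller forwards them to the follower. Rates, joint count, and I/O
# stubs are illustrative assumptions.

shared = {"leader_q": [0.0] * 24}            # hypothetical leader joint vector
lock = threading.Lock()

def read_leader_joints():                    # placeholder for servo reads
    return [0.0] * 24

def send_follower_targets(q):                # placeholder for robot command I/O
    pass

def joint_state_subscriber(hz=200):
    while True:
        q = read_leader_joints()
        with lock:
            shared["leader_q"] = q
        time.sleep(1.0 / hz)

def direct_joint_controller(hz=200):
    while True:
        with lock:
            q = list(shared["leader_q"])
        send_follower_targets(q)             # one-to-one joint mapping
        time.sleep(1.0 / hz)

threading.Thread(target=joint_state_subscriber, daemon=True).start()
threading.Thread(target=direct_joint_controller, daemon=True).start()
time.sleep(1.0)                              # let the loops run briefly
```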

3. Adaptive Force Feedback and Safety Mechanisms

A central safety and usability innovation is the integration of adaptive force feedback at every leader joint. Each joint is spring-loaded with a virtual bias torque computed as:

$\tau_{\text{bias}} = \mathbf{k}\,\big(q(t) - q_{\text{base}}\big)$

where $q(t)$ is the instantaneous measured joint position, $q_{\text{base}}$ is the neutral “rest” configuration, and $\mathbf{k}$ is a diagonal matrix of user-specified spring constants per joint.
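For concreteness, the bias-torque computation can be written out as the short sketch below, with $\mathbf{k}$ realized as a diagonal matrix; the joint count and spring-constant values are placeholders, not values reported in the paper.

```python
import numpy as np

# Sketch of the bias-torque expression tau_bias = k (q(t) - q_base).
# Joint count and spring constants are placeholder values.

k = np.diag([0.4, 0.4, 0.3, 0.3])           # N·m/rad, user-specified per joint (assumed)
q_base = np.zeros(4)                        # neutral "rest" configuration (rad)
q_t = np.array([0.10, -0.05, 0.30, 0.00])   # instantaneous measured joint positions (rad)

tau_bias = k @ (q_t - q_base)               # virtual spring torque from deflection
print(tau_bias)                             # -> [ 0.04 -0.02  0.09  0.  ]
```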

Adaptive force feedback provides multiple benefits:

  • Augmented proprioception: Operators receive haptic cues about joint limits, singular postures, and excessive extension via increasing resistance.
  • Safety: When a limb is released, its joints return to a safe base position, preventing the leader from being left in unsafe configurations.
  • Mode-adaptive stiffness: For limbs repurposed as velocity control (e.g., in loco-manipulation), force feedback for the corresponding arm is strengthened to resist unintentional movement.

These feedback mechanisms are user-configurable to accommodate operator strength and task requirements.
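One way to express the user-configurable, mode-adaptive stiffness just described is sketched below: per-joint spring constants are scaled up for an arm whose limb pair has been switched to velocity control. The stiffening factor and data layout are assumptions, not values from the paper.

```python
import numpy as np

# Sketch of mode-adaptive stiffness: stiffen a deactivated arm's spring
# constants so it resists unintentional motion while its leg acts as a
# velocity "joystick". The factor of 3 and base constants are assumed.

base_k = {"left_arm": np.full(6, 0.4), "right_arm": np.full(6, 0.4)}   # N·m/rad

def effective_spring_constants(arm: str, velocity_mode: bool, stiffen: float = 3.0) -> np.ndarray:
    """Return the spring-constant vector for one arm given its current mode."""
    k = base_k[arm]
    return stiffen * k if velocity_mode else k

print(effective_spring_constants("left_arm", velocity_mode=True))    # stiffened: all 1.2
print(effective_spring_constants("right_arm", velocity_mode=False))  # default: all 0.4
```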

4. Demonstrated Applications

CHILD’s modularity and generality make it suitable for a wide variety of teleoperation and imitation tasks, demonstrated on several platforms:

| Application Mode | Robotic Platform | Example Description |
|---|---|---|
| Loco-manipulation | Humanoid (full-body) | Simultaneous direct upper-body control and leg-based navigation (e.g., fetch-and-place tasks) |
| Full-body teleoperation | Humanoid | Both arms and legs controlled directly for tasks such as ball games, e.g., catching and passing objects with the feet |
| Multi-limb cooperative task | Simulation (crawling) | Multiple operators each control limbs or the torso, demonstrating synchronized multi-agent coordination |
| Non-humanoid configuration | Dual-arm kitchen robot | Upper limbs managed via a customized mount configuration, demonstrating flexible adaptation |

These scenarios highlight the system’s capacity for live demonstration, dexterous object manipulation, and mobile navigation—all with responsive, user-driven control.

5. Modularity, Extensibility, and Open-Source Availability

All hardware components, 3D-printable design files, actuator-device mappings, and associated electronics schematics are released as open-source materials, significantly lowering the barrier for replication and further development. By targeting a total material and electronic cost under $1,000 and relying on readily available off-the-shelf parts, CHILD can be widely adopted even in resource-limited research settings. Compatibility with a variety of humanoid robots (Unitree G1, Boston Dynamics Atlas, custom arms, and dual-arm platforms) is achieved through straightforward mechanical reconfiguration.

The open-source repository is hosted at https://uiuckimlab.github.io/CHILD-pages, which includes detailed documentation, assembly instructions, and reference implementations of teleoperation software.

6. Technical and Research Implications

The CHILD system fills a critical gap in humanoid teleoperation research by enabling, for the first time, compact wearable whole-body joint-level teleoperation with force feedback on all major joints. This architecture directly addresses several longstanding challenges:

  • Joint-level whole-body control allows full utilization of humanoid robots for expressive live demonstrations and complex manipulation.
  • Safety-aware adaptive force feedback mitigates risks related to singular configurations, excessive extension, and operator fatigue.
  • Modular, open-source platform democratizes access and enables broad evaluation and iterative improvement.
  • Low-latency, high-fidelity mapping between operator and robot ensures immediate correspondence, which is essential for nuanced human-robot interaction studies.

The design advances imitation learning pipelines and human-robot skill transfer, and provides a robust basis for future teleoperation systems requiring intuitive, high-DOF human-machine interfacing in research and real-world environments.

7. Accessibility and Prospects for Future Work

By open-sourcing both hardware and software, CHILD is positioned as a foundation for new teleoperation paradigms, facilitating reproducibility and community-driven extension. Future research directions enabled by the system may include experimental validation of joint-level imitation learning, exploration of shared autonomy (with operator- and AI-driven joint blending), integration with multi-sensory feedback (vision, haptics), and evaluation in collaborative or adversarial multi-agent scenarios.

In summary, CHILD is a wearable, modular, and reconfigurable teleoperation system delivering real-time, whole-body joint-level control of humanoid robots, with integrated adaptive force feedback for enhanced safety and operator experience, and is openly accessible for further research (Myers et al., 31 Jul 2025).
