
iCub3 Avatar System: Enabling Remote Fully-Immersive Embodiment of Humanoid Robots (2203.06972v2)

Published 14 Mar 2022 in cs.RO

Abstract: We present an avatar system designed to facilitate the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia (IIT). More precisely, the contribution of the paper is twofold: first, we present the humanoid iCub3 as a robotic avatar which integrates the latest significant improvements after about fifteen years of development of the iCub series; second, we present a versatile avatar system enabling humans to embody humanoid robots, encompassing aspects such as locomotion, manipulation, voice, and facial expressions, with comprehensive sensory feedback including visual, auditory, haptic, weight, and touch modalities. We validate the system by implementing several avatar architecture instances, each tailored to specific requirements. First, we evaluated the architecture optimized for verbal, non-verbal, and physical interaction with a remote recipient. This testing involved the operator in Genoa and the avatar at the Biennale di Venezia in Venice - about 290 km away - thus allowing the operator to remotely visit the Italian art exhibition. Second, we evaluated the architecture optimized for physical collaboration with a recipient and for public engagement on stage, live, at the We Make Future show, a prominent world digital innovation festival. In this instance, the operator was situated in Genoa while the avatar operated in Rimini - about 300 km away - interacting with a recipient who entrusted the avatar with a payload to carry on stage before an audience of approximately 2000 spectators. Third, we present the architecture implemented by the iCub Team for the ANA Avatar XPrize competition.

Authors (21)
  1. Stefano Dafarra (26 papers)
  2. Kourosh Darvish (17 papers)
  3. Riccardo Grieco (3 papers)
  4. Gianluca Milani (2 papers)
  5. Ugo Pattacini (11 papers)
  6. Lorenzo Rapetti (16 papers)
  7. Giulio Romualdi (20 papers)
  8. Alessandro Scalzo (1 paper)
  9. Ines Sorrentino (7 papers)
  10. Silvio Traversaro (40 papers)
  11. Enrico Valli (3 papers)
  12. Paolo Maria Viceconte (4 papers)
  13. Giorgio Metta (20 papers)
  14. Marco Maggiali (4 papers)
  15. Daniele Pucci (86 papers)
  16. Carlotta Sartore (5 papers)
  17. Mohamed Elobaid (8 papers)
  18. Nuno Guedelha (7 papers)
  19. Connor Herron (1 paper)
  20. Alexander Leonessa (14 papers)
Citations (29)

Summary

  • The paper introduces a novel avatar system that enables immersive remote embodiment of humanoid robots via advanced teleoperation interfaces.
  • It employs a modular architecture that integrates wearable devices for motion retargeting and manipulation, and a modified unicycle model for locomotion.
  • Empirical validations in real-world events demonstrate its potential for remote operations in sectors like telemedicine and disaster response.

Overview of the iCub3 Avatar System

The paper "iCub3 Avatar System: Enabling Remote Fully-Immersive Embodiment of Humanoid Robots" presents a comprehensive and detailed exploration of a novel avatar system designed to facilitate the remote teleoperation and embodiment of humanoid robots. The research focuses on iCub3, a humanoid robot that represents the latest advancements in the iCub series developed at the Istituto Italiano di Tecnologia (IIT). The system allows humans to embody humanoid robots, enabling remote verbal, non-verbal, and physical interactions with significant potential implications for various sectors, particularly those requiring remote operations in challenging environments.

Technical Components and Methodologies

The avatar system exhibits several technical innovations in robot teleoperation and teleperception. The system's architecture is modular, composed of teleoperation and teleperception interfaces, interlinked through a network that accommodates delays intrinsic to remote communications. The teleoperation interface relies on retargeting and control mechanisms that process the operator's movements via a set of wearable devices. These inputs are transformed into control commands for the humanoid robot, enabling intricate tasks like manipulation, locomotion, voice, and facial expression control.
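
To make this layering concrete, the sketch below shows how wearable readings could be mapped into robot-side references. It is a minimal Python illustration: the field names, scaling gains, and the single `retarget` function are assumptions chosen for exposition, not the interfaces actually used in the paper.

```python
from dataclasses import dataclass

@dataclass
class OperatorInputs:
    """Readings gathered from the operator's wearable devices (hypothetical fields)."""
    head_orientation_rpy: tuple   # roll, pitch, yaw from the headset, in radians
    left_hand_pose: tuple         # (x, y, z, roll, pitch, yaw) of the left tracker
    right_hand_pose: tuple        # same convention for the right tracker
    finger_closure: float         # 0.0 = open hand, 1.0 = fully closed
    walking_joypad: tuple         # (forward, turn) axes in [-1, 1]

@dataclass
class RobotReferences:
    """References forwarded to the robot-side controllers (hypothetical fields)."""
    neck_rpy: tuple
    left_hand_target: tuple
    right_hand_target: tuple
    grip_command: float
    base_velocity: tuple          # (linear m/s, angular rad/s) for the walking controller

def retarget(inputs: OperatorInputs) -> RobotReferences:
    """Map operator motion onto robot references; a stand-in for the
    retargeting and control layer described in the paper."""
    forward, turn = inputs.walking_joypad
    return RobotReferences(
        neck_rpy=inputs.head_orientation_rpy,
        left_hand_target=inputs.left_hand_pose,
        right_hand_target=inputs.right_hand_pose,
        grip_command=inputs.finger_closure,
        base_velocity=(0.4 * forward, 0.8 * turn),  # illustrative scaling of joypad axes
    )
```

In a modular architecture of this kind, the retargeting stage stays independent of the transport layer, so the same references can be sent over whatever network link connects operator and robot.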

Retargeting and Locomotion

The paper meticulously describes the retargeting mechanism, which translates the operator’s actions into robot motions. This system leverages a combination of devices, including HTC VIVE products, iFeel wearable technologies, and custom-developed manipulation and locomotion frameworks. Notably, the locomotion interface supports both short and long-distance movement using control strategies modeled after a modified unicycle dynamics framework, distinguishing between intentional and triggered motions to achieve efficient, obstacle-free movement in varying terrains.
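
For intuition, the snippet below integrates standard unicycle kinematics and gates small stick deflections so that only deliberate commands produce a walking reference. The deadzone, gains, and joypad mapping are illustrative assumptions; the paper's modified unicycle model and its distinction between intentional and triggered motions are more elaborate than this sketch.

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """One Euler step of standard unicycle kinematics:
    x' = v*cos(theta), y' = v*sin(theta), theta' = omega."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

def locomotion_reference(joypad_forward, joypad_turn, deadzone=0.1):
    """Suppress small stick deflections so only intentional inputs generate
    a non-zero walking reference (thresholds and gains are illustrative)."""
    v = 0.3 * joypad_forward if abs(joypad_forward) > deadzone else 0.0
    omega = 0.5 * joypad_turn if abs(joypad_turn) > deadzone else 0.0
    return v, omega
```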

Sensory Feedback and Communication

The iCub3 avatar system incorporates advanced sensory feedback loops, providing the operator with tactile, auditory, and visual cues from the robot’s environment. Sophisticated touch feedback systems, utilizing tactile skin sensors on the robot, relay interaction information back to the operator, enhancing the telepresence experience. Reliable communication, facilitated by robust middleware and network solutions like YARP and OpenVPN, ensures synchronization and real-time control between geographically separated systems.
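
As a rough sketch of such a feedback relay, the following example (assuming YARP's Python bindings are installed) reads tactile values from a robot-side port and forwards a single intensity value to a hypothetical operator-side haptic port. The port names and the averaging step are assumptions for illustration, not the system's actual configuration.

```python
import time
import yarp

yarp.Network.init()

# Operator-side relay: receive tactile data from the robot and forward a
# single intensity value to a (hypothetical) haptic device port.
skin_port = yarp.BufferedPortBottle()
skin_port.open("/operator/skin:i")
haptic_port = yarp.BufferedPortBottle()
haptic_port.open("/operator/haptics:o")

# The source port name is an assumption; the actual iCub skin ports differ.
yarp.Network.connect("/icub/skin/left_hand", "/operator/skin:i")

while True:
    bottle = skin_port.read(False)          # non-blocking read
    if bottle is None:
        time.sleep(0.01)
        continue
    # Collapse the taxel readings into one intensity value (illustrative mapping).
    values = [bottle.get(i).asFloat64() for i in range(bottle.size())]
    intensity = sum(values) / max(len(values), 1)
    out = haptic_port.prepare()
    out.clear()
    out.addFloat64(intensity)
    haptic_port.write()
```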

Empirical Validation and Results

Validation of the iCub3 avatar system is performed through several case studies and real-world deployments, including participation in high-profile events like the ANA Avatar XPrize competition. The system demonstrated significant capabilities in environments requiring robust remote interactive features. Notable experiments include remote exploration of the Biennale di Venezia and appearances at the “We Make Future” festival, showcasing the system’s robustness and immersive experience even with novices operating the platform.

Implications and Future Prospects

The iCub3 avatar system holds promising implications for industries bounded by geographical constraints, offering potential applications in telemedicine, remote disaster response, and enhancing human-robot interaction in diverse settings. Additionally, the research opens avenues for further developments in humanoid robot autonomy, emphasizing the need for seamless control architectures that integrate human and autonomous decision-making for enhanced stability and operational efficiency.

Future Directions

The paper does not claim to present an exhaustive or flawless system but provides insights into areas necessitating improvements, such as operator cognitive load management and sensor limitations. The adaptive layering of control systems and increased modularity are suggested as pathways for future enhancement. Moreover, there is a call for exploring hybrid teleoperation systems that integrate improved machine learning algorithms for predictive modeling and decision-making.

Conclusion

The iCub3 Avatar System represents a significant advancement in humanoid robotics and teleoperation, embodying technological progress toward enabling richer human-robot collaboration in disparate applications. This research delineates a trajectory for ongoing improvements in robot embodiment and interaction frameworks, underlining the critical role of comprehensive avatar systems in current and future telepresence applications. The evolution of such systems is paramount for advancing robotic applications in complex, real-world environments where human presence is physically constrained yet critically necessary.
