BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration (2307.05933v1)

Published 12 Jul 2023 in cs.RO and cs.AI

Abstract: Human bimanual manipulation can perform more complex tasks than a simple combination of two single arms, which is credited to the spatio-temporal coordination between the arms. However, the description of bimanual coordination is still an open topic in robotics. This makes it difficult to give an explainable coordination paradigm, let alone applied to robotics. In this work, we divide the main bimanual tasks in human daily activities into two types: leader-follower and synergistic coordination. Then we propose a relative parameterization method to learn these types of coordination from human demonstration. It represents coordination as Gaussian mixture models from bimanual demonstration to describe the change in the importance of coordination throughout the motions by probability. The learned coordinated representation can be generalized to new task parameters while ensuring spatio-temporal coordination. We demonstrate the method using synthetic motions and human demonstration data and deploy it to a humanoid robot to perform a generalized bimanual coordination motion. We believe that this easy-to-use bimanual learning from demonstration (LfD) method has the potential to be used as a data augmentation plugin for robot large manipulation model training. The corresponding codes are open-sourced in https://github.com/Skylark0924/Rofunc.


Summary

  • The paper introduces BiRP, a novel method that uses relative parameterization with Gaussian Mixture Models to extract coordination from human demonstrations.
  • It distinguishes between leader-follower and synergistic motion generation, enabling dynamic adaptation in bimanual robotic tasks.
  • Evaluation in synthetic and real-world scenarios confirms BiRP’s ability to minimize motion costs and improve synchronization in robotic arms.

Overview of BiRP: Learning Robot Generalized Bimanual Coordination

This paper presents BiRP, an innovative approach for enhancing bimanual coordination in robotics through a method termed Relative Parameterization (RP). The authors tackle the intricate challenge of synchronizing two robotic arms by drawing inspiration from human bimanual manipulation. The focus is on learning and generalizing this coordination from human demonstrations.

Key Contributions

  1. Coordination Parameterization: The paper introduces the BiRP method, which leverages Gaussian Mixture Models (GMMs) to extract coordination characteristics from bimanual demonstrations. By using a dynamic frame of reference based on the motion of the other arm, BiRP captures the relative motion and represents it probabilistically. This approach helps in understanding the shifting importance of coordination as tasks progress.
  2. Leader-Follower Motion Generation: Distinguishing between leader and follower arms, BiRP generates corresponding bimanual motions. This adaptability enables one arm's motion to inform and adjust the trajectory of the other, maintaining effective coordination.
  3. Synergistic Motion Generation: When roles are not explicitly defined between the arms, BiRP enables simultaneous adaptation of both arms to new situations, ensuring that coordination is maintained across different task parameters.
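The leader-follower mechanism above amounts to probabilistic conditioning: given the leader's state, infer the follower state that preserves the learned coordination. The paper encodes this with Gaussian Mixture Models; the single-component sketch below (with synthetic data and an invented offset) is only meant to show the conditioning step, not the full method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "demonstrations": the follower tracks the leader with a fixed
# offset plus noise. The offset value is illustrative, not from the paper.
leader = rng.uniform(0.0, 1.0, size=(200, 2))
offset = np.array([0.3, -0.1])
follower = leader + offset + rng.normal(0.0, 0.01, size=(200, 2))

# Fit one joint Gaussian over the stacked 4-D data [leader, follower].
data = np.hstack([leader, follower])
mu = data.mean(axis=0)
sigma = np.cov(data, rowvar=False)

# Condition follower | leader (standard Gaussian conditioning).
mu_l, mu_f = mu[:2], mu[2:]
s_ll, s_fl = sigma[:2, :2], sigma[2:, :2]

def follower_for(leader_pos):
    """Predict the follower position that preserves the learned coordination."""
    return mu_f + s_fl @ np.linalg.solve(s_ll, leader_pos - mu_l)

new_leader = np.array([0.5, 0.5])
print(follower_for(new_leader))  # approximately new_leader + offset
```

With a mixture of such Gaussians (as in the paper), the same conditioning is weighted by component responsibilities, letting the coordination relation vary over the course of the motion.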

Methodology

The authors divide bimanual tasks into leader-follower and synergistic coordination types. BiRP employs Relative Parameterization by using GMMs that incorporate task-specific and relative observations dynamically. This method allows the encoding of implicit coordination information, which can be reused in new task scenarios.
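The task-parameterized step can be sketched as follows: each frame stores a local Gaussian, the new task's frame poses map those Gaussians into the world, and their product gives the generalized distribution. The sketch below uses one Gaussian per frame and made-up frame poses purely for illustration; the paper uses full GMMs.

```python
import numpy as np

def product_of_gaussians(gaussians):
    """Fuse world-frame Gaussians via precision-weighted combination."""
    lam = sum(np.linalg.inv(s) for _, s in gaussians)      # total precision
    eta = sum(np.linalg.inv(s) @ m for m, s in gaussians)  # information vector
    sigma = np.linalg.inv(lam)
    return sigma @ eta, sigma

# Local Gaussians observed in two frames during demonstration (illustrative).
local = [(np.array([0.1, 0.0]), np.diag([0.01, 0.04])),
         (np.array([-0.1, 0.0]), np.diag([0.04, 0.01]))]

# New task parameters: rotation A_j and origin b_j of each frame.
A1, b1 = np.eye(2), np.array([0.0, 0.0])
A2, b2 = np.eye(2), np.array([0.5, 0.5])

# Map each local Gaussian into the world frame: mu -> A mu + b, S -> A S A^T.
world = [(A1 @ local[0][0] + b1, A1 @ local[0][1] @ A1.T),
         (A2 @ local[1][0] + b2, A2 @ local[1][1] @ A2.T)]

mu, sigma = product_of_gaussians(world)
print(mu)  # a compromise pulled toward each frame's more confident axis
```

Changing only `(A_j, b_j)` regenerates the distribution for a new task configuration, which is what lets the learned skill generalize.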

  • Demonstration Representation: Bimanual demonstrations are represented by task-parameterized GMMs, transforming demonstrations in various frames to extract generalized skills.
  • Control and Motion Generation: The paper integrates coordination consideration into the control phase using a revised Linear Quadratic Tracking (LQT) controller. The overall motion cost is minimized by considering both task and coordination costs, leading to the generation of smooth and coordinated robot trajectories.
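The cost combination in the control phase can be illustrated with a deliberately stripped-down example. The paper's controller is an LQT formulation with system dynamics; the toy below drops the dynamics and solves, per time step, for two 1-D arm positions trading off each arm's task reference against a learned relative offset. All weights and references are invented for illustration.

```python
import numpy as np

T = 50
t = np.linspace(0.0, 1.0, T)
ref_left = np.sin(np.pi * t)   # task reference for the left arm
ref_right = np.zeros(T)        # weak task pull for the right arm
offset = 0.4                   # learned coordination: right ~= left + 0.4
wL, wR, wC = 10.0, 0.1, 10.0   # task (left/right) and coordination weights

# Per-step quadratic cost:
#   wL (x_l - ref_l)^2 + wR (x_r - ref_r)^2 + wC (x_r - x_l - offset)^2
# Setting the gradient to zero gives a 2x2 linear system per time step.
A = np.array([[wL + wC, -wC],
              [-wC, wR + wC]])
x = np.zeros((T, 2))
for k in range(T):
    b = np.array([wL * ref_left[k] - wC * offset,
                  wR * ref_right[k] + wC * offset])
    x[k] = np.linalg.solve(A, b)

# The left arm tracks its reference while the right arm follows at ~offset.
print(x[:, 0].max(), (x[:, 1] - x[:, 0]).mean())
```

Raising `wC` relative to the task weights tightens coordination at the expense of tracking, which mirrors the paper's point that the importance of coordination can vary over the motion.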

Evaluation

The authors demonstrate the effectiveness of BiRP through synthetic and real-world scenarios:

  • Synthetic Data: Using Bezier curves, the paper illustrates the method's robustness across 2D and 3D scenarios. The ability of BiRP to maintain coordination while adapting to new task parameters is highlighted.
  • Real Robot Experiment: Using a humanoid robot, the method is applied to tasks such as palletizing and pouring, showing successful adaptation to different object positions and task parameters.
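Synthetic bimanual demonstrations of the kind used in the evaluation can be generated from Bezier curves. The sketch below samples two cubic Bezier paths that meet at a shared point, standing in for left/right end-effector trajectories; the control points are made up.

```python
import numpy as np

def cubic_bezier(ctrl, n=100):
    """Sample a cubic Bezier curve; ctrl has shape (4, dim)."""
    s = np.linspace(0.0, 1.0, n)
    basis = np.stack([(1 - s)**3, 3 * s * (1 - s)**2,
                      3 * s**2 * (1 - s), s**3])      # Bernstein basis, (4, n)
    return basis.T @ ctrl                             # (n, dim)

# Two synthetic "arm" paths sharing a meeting point (illustrative values).
meet = np.array([0.5, 0.8])
left = cubic_bezier(np.array([[0.0, 0.0], [0.1, 0.5], [0.3, 0.7], meet]))
right = cubic_bezier(np.array([[1.0, 0.0], [0.9, 0.5], [0.7, 0.7], meet]))
```

Varying the start points or the meeting point yields new task parameters, which is the kind of perturbation against which the paper tests whether coordination is preserved.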

Implications and Future Directions

The research sheds light on the potential of learning from demonstration (LfD) for complex robotic manipulation, emphasizing the need for coordination beyond straightforward single-arm tasks. The integration of relative parameterization into bimanual robotics marks a step toward replicating human dexterity and adaptability in humanoid robots.

Future research could extend this methodology to joint-space or whole-body coordination, addressing current limitations and improving real-time performance. Moreover, BiRP could serve as a robust data augmentation tool for training large manipulation models.

In conclusion, the BiRP method provides a solid foundation for advancing bimanual coordination in humanoid robotics, underscoring the necessity of incorporating dynamic, probabilistic frameworks derived from human demonstrations for enhanced adaptability and functionality in robotic applications.