Body Design and Gait Generation of Chair-Type Asymmetrical Tripedal Low-rigidity Robot (2404.05932v1)
Abstract: In this study, we designed a chair-type asymmetrical tripedal low-rigidity robot modeled on the three-legged chair character in the film "Suzume" and generated gaits for it. Because its three legs are attached asymmetrically to the body, the robot cannot balance easily. Moreover, its actuators are servo motors that accept only feed-forward rotational angle commands, and its only sensor measures the body's posture quaternion. With this asymmetrical and sensor-limited body, we analyzed how gaits for walking and standing up can be generated using two different methods: linearly interpolating between the key postures found through trial and error on the physical robot, and transferring gaits learned by reinforcement learning in simulation to the physical robot. Both methods produced gaits that realized walking and stand-up motions, and the two methods yielded distinct, interesting gait patterns, both of which we confirmed on the physical robot. Our code and demonstration videos are available at https://github.com/shin0805/Chair-TypeAsymmetricalTripedalRobot.git
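To make the first method concrete, below is a minimal Python sketch of how keyframe linear interpolation can turn a handful of trial-and-error postures into a continuous feed-forward servo command stream. The keyframe angles, timing, joint count, and the `send_to_servos` call are illustrative assumptions for a generic servo-driven robot, not taken from the paper's code.

```python
import numpy as np

# Key postures (per-joint servo angles, degrees) found by trial and error,
# connected by linear interpolation to form one gait cycle. All values here
# are hypothetical placeholders, not the paper's actual keyframes.
KEYFRAMES = [  # (time_s, [angles for each servo joint])
    (0.0, [0.0, 20.0, -15.0]),
    (0.5, [10.0, 35.0, -5.0]),
    (1.0, [0.0, 20.0, -15.0]),  # return to the start pose to loop the cycle
]

def command_at(t: float) -> np.ndarray:
    """Linearly interpolate the servo angle command at time t within one cycle."""
    cycle = KEYFRAMES[-1][0]
    t = t % cycle
    for (t0, q0), (t1, q1) in zip(KEYFRAMES, KEYFRAMES[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (1 - a) * np.asarray(q0) + a * np.asarray(q1)
    return np.asarray(KEYFRAMES[-1][1])

# Stream feed-forward commands at 50 Hz; send_to_servos is a hypothetical
# hardware call standing in for the actual servo interface.
for step in range(100):
    q = command_at(step * 0.02)
    # send_to_servos(q)
```

Because the servos accept only feed-forward angle commands, the entire gait reduces to choosing good keyframes and timing; there is no closed-loop correction between keyframes in this sketch.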