An Efficient Learning Control Framework With Sim-to-Real for String-Type Artificial Muscle-Driven Robotic Systems (2405.10576v2)

Published 17 May 2024 in cs.RO

Abstract: Robotic systems driven by artificial muscles present unique challenges due to the nonlinear dynamics of actuators and the complex designs of mechanical structures. Traditional model-based controllers often struggle to achieve desired control performance in such systems. Deep reinforcement learning (DRL), a trending machine learning technique widely adopted in robot control, offers a promising alternative. However, integrating DRL into these robotic systems faces significant challenges, including the requirement for large amounts of training data and the inevitable sim-to-real gap when deployed to real-world robots. This paper proposes an efficient reinforcement learning control framework with sim-to-real transfer to address these challenges. Bootstrap and augmentation enhancements are designed to improve the data efficiency of baseline DRL algorithms, while a sim-to-real transfer technique, namely randomization of muscle dynamics, is adopted to bridge the gap between simulation and real-world deployment. Extensive experiments and ablation studies are conducted utilizing two string-type artificial muscle-driven robotic systems including a two degree-of-freedom robotic eye and a parallel robotic wrist, the results of which demonstrate the effectiveness of the proposed learning control strategy.
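The sim-to-real technique named in the abstract, randomization of muscle dynamics, amounts to perturbing the simulator's actuator parameters at the start of each training episode so the learned policy generalizes across the modeling error. A minimal sketch of that idea is below; the parameter names and perturbation ranges are illustrative assumptions, not values from the paper.

```python
import random

# Hypothetical nominal muscle-dynamics parameters; the names and values
# here are illustrative assumptions, not taken from the paper.
NOMINAL = {"stiffness": 120.0, "damping": 0.8, "thermal_time_constant": 2.5}

def randomize_muscle_dynamics(nominal, scale=0.2, rng=random):
    """Return one sampled parameter set, with each entry perturbed
    uniformly within +/- `scale` of its nominal value."""
    return {k: v * rng.uniform(1.0 - scale, 1.0 + scale)
            for k, v in nominal.items()}

# Each training episode would run the simulator with a freshly sampled
# set, so the policy cannot overfit to one fixed actuator model.
episode_params = randomize_muscle_dynamics(NOMINAL)
```

In a full training loop, the sampled parameters would be written into the muscle simulator before each rollout; the spread of the sampling distribution trades off robustness against training difficulty.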

Authors (6)
  1. Jiyue Tao
  2. Yunsong Zhang
  3. Sunil Kumar Rajendran
  4. Feitian Zhang
  5. Dexin Zhao
  6. Tongsheng Shen