DittoGym: Learning to Control Soft Shape-Shifting Robots (2401.13231v2)

Published 24 Jan 2024 in cs.RO and cs.LG

Abstract: Robot co-design, where the morphology of a robot is optimized jointly with a learned policy to solve a specific task, is an emerging area of research. It holds particular promise for soft robots, which are amenable to novel manufacturing techniques that can realize learned morphologies and actuators. Inspired by nature and recent novel robot designs, we propose to go a step further and explore reconfigurable robots, defined as robots that can change their morphology within their lifetime. We formalize control of reconfigurable soft robots as a high-dimensional reinforcement learning (RL) problem. We unify morphology change, locomotion, and environment interaction in the same action space, and introduce an appropriate coarse-to-fine curriculum that enables us to discover policies that accomplish fine-grained control of the resulting robots. We also introduce DittoGym, a comprehensive RL benchmark for reconfigurable soft robots that require fine-grained morphology changes to accomplish their tasks. Finally, we evaluate our proposed coarse-to-fine algorithm on DittoGym and demonstrate robots that learn to change their morphology several times within a single sequence, a capability uniquely enabled by our RL algorithm. More results are available at https://dittogym.github.io.
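Illustrative sketch (not the DittoGym API): the Python snippet below mimics the two key ideas described in the abstract, a single grid-shaped action space that commands both actuation and morphology change, and a coarse-to-fine curriculum that warm-starts each finer action resolution from the previous one. The class ToyReconfigurableEnv, the upsample helper, the placeholder dynamics and reward, and the 4-to-8-to-16 resolution schedule are all illustrative assumptions, not the benchmark's actual interface.

# Minimal, self-contained sketch (assumed names and shapes, not DittoGym itself):
# a Gym-style environment whose action is a 2D grid over the robot's body, plus
# a coarse-to-fine loop that revisits the task at increasing action resolutions.
import numpy as np


class ToyReconfigurableEnv:
    """Toy stand-in for a reconfigurable soft-robot task.

    The action is a (res x res) grid; each cell commands local actuation or
    material change, so morphology control and locomotion share one action space.
    """

    def __init__(self, action_res: int = 4):
        self.action_res = action_res
        self.state = np.zeros(8)

    def reset(self) -> np.ndarray:
        self.state = np.zeros(8)
        return self.state

    def step(self, action: np.ndarray):
        assert action.shape == (self.action_res, self.action_res)
        # Placeholder dynamics: treat mean actuation as a stand-in for task
        # progress (e.g., distance moved toward a target).
        self.state = self.state + 0.01 * action.mean()
        reward = float(action.mean())
        done = False
        return self.state, reward, done, {}


def upsample(policy_grid: np.ndarray, new_res: int) -> np.ndarray:
    """Nearest-neighbour upsample of a coarse action grid to a finer resolution."""
    reps = new_res // policy_grid.shape[0]
    return np.kron(policy_grid, np.ones((reps, reps)))


# Coarse-to-fine curriculum: start with a low-resolution action grid, then refine.
# A real agent would train a policy at each stage; here a fixed grid is carried
# forward only to show the warm-started resolution schedule.
grid = np.random.uniform(-1, 1, size=(4, 4))
for res in (4, 8, 16):
    if res != grid.shape[0]:
        grid = upsample(grid, res)          # warm-start the finer stage
    env = ToyReconfigurableEnv(action_res=res)
    obs = env.reset()
    for _ in range(10):                     # placeholder rollout
        obs, reward, done, info = env.step(grid)
    print(f"resolution {res}x{res}: last reward {reward:.3f}")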
