
DexMimicGen: Automated Data Generation for Bimanual Dexterous Manipulation via Imitation Learning (2410.24185v2)

Published 31 Oct 2024 in cs.RO, cs.AI, cs.CV, and cs.LG

Abstract: Imitation learning from human demonstrations is an effective means to teach robots manipulation skills. But data acquisition is a major bottleneck in applying this paradigm more broadly, due to the amount of cost and human effort involved. There has been significant interest in imitation learning for bimanual dexterous robots, like humanoids. Unfortunately, data collection is even more challenging here due to the challenges of simultaneously controlling multiple arms and multi-fingered hands. Automated data generation in simulation is a compelling, scalable alternative to fuel this need for data. To this end, we introduce DexMimicGen, a large-scale automated data generation system that synthesizes trajectories from a handful of human demonstrations for humanoid robots with dexterous hands. We present a collection of simulation environments in the setting of bimanual dexterous manipulation, spanning a range of manipulation behaviors and different requirements for coordination among the two arms. We generate 21K demos across these tasks from just 60 source human demos and study the effect of several data generation and policy learning decisions on agent performance. Finally, we present a real-to-sim-to-real pipeline and deploy it on a real-world humanoid can sorting task. Generated datasets, simulation environments and additional results are at https://dexmimicgen.github.io/


Summary

  • The paper introduces an automated system that generates 21,000 demos from only 60 human demonstrations for bimanual tasks.
  • It develops a suite of bimanual simulation environments and a real-to-sim-to-real pipeline, achieving a 90% success rate on a real-world humanoid can sorting task.
  • The method significantly reduces human effort and cost in training dexterous robots, offering a scalable framework for imitation learning research.

An Overview of DexMimicGen: Automated Data Generation for Bimanual Dexterous Manipulation

The paper "DexMimicGen: Automated Data Generation for Bimanual Dexterous Manipulation via Imitation Learning" presents a novel method for generating large-scale training datasets in the domain of robotic manipulation. The primary issue addressed is the high cost and human effort associated with acquiring demonstration data necessary for training bimanual dexterous robots. By automating the data generation process via simulation, DexMimicGen aims to mitigate these constraints.

Key Contributions

DexMimicGen introduces a method that synthesizes trajectories for humanoid robots with dexterous hands from a limited set of human demonstrations. The authors make several key contributions:

  1. Automated Data Generation System: DexMimicGen generates 21,000 demos across diverse tasks using only 60 source human demonstrations. This data generation system leverages an asynchronous per-arm execution strategy, synchronization, and sequential constraints to enable multi-arm coordination.
  2. Simulation Environment Development: A suite of simulation environments was developed, focusing on tasks that require different coordination behaviors between two arms. The environments facilitate the study of the effects of data generation and policy learning choices on agent performance.
  3. Real-to-Sim-to-Real Pipeline: A practical implementation of the developed system was demonstrated by deploying it on a real-world humanoid task (can sorting), achieving a 90% success rate, significantly surpassing baseline performances.
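The core idea inherited from MimicGen is object-centric retargeting: each demonstrated end-effector segment is expressed relative to the pose of the object it manipulates, then recomposed with that object's new pose in the generated scene. A minimal sketch of this idea, simplified to planar (2D) rigid poses and using illustrative names rather than the authors' actual code:

```python
# Object-centric retargeting sketch (simplified to 2D; not DexMimicGen's API).
# Each end-effector pose is mapped through the rigid transform that carries
# the source object's pose to the target object's pose.
import numpy as np

def pose_to_mat(x, y, theta):
    """Build a 3x3 homogeneous transform for a planar pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def retarget(segment, src_obj, tgt_obj):
    """Map end-effector poses so they keep the same pose relative to the object."""
    delta = tgt_obj @ np.linalg.inv(src_obj)  # rigid transform between scenes
    return [delta @ T for T in segment]

# Source demo: object at the origin, gripper sliding along x.
src_obj = pose_to_mat(0.0, 0.0, 0.0)
segment = [pose_to_mat(0.1 * t, 0.0, 0.0) for t in range(3)]

# New scene: object translated and rotated 90 degrees.
tgt_obj = pose_to_mat(0.5, 0.2, np.pi / 2)
new_segment = retarget(segment, src_obj, tgt_obj)
```

After retargeting, the first pose coincides with the new object pose and the gripper's approach direction is rotated along with the object, which is what lets a handful of source demos cover many scene configurations.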

Methodology

DexMimicGen extends upon the principles of MimicGen by enabling data generation for bimanual and dexterous manipulation tasks. The system decomposes tasks into a series of object-centric subtasks categorized into three types: parallel, coordination, and sequential. Each type handles specific challenges such as independent arm actions, synchronization for arm coordination, and enforcing operation order where necessary.

  • Parallel Subtasks: Address independent sub-goals for each arm, executed asynchronously.
  • Coordination Subtasks: Require synchronized execution to maintain relative poses, using either transformation or replay strategies to align actions.
  • Sequential Subtasks: Use ordering constraints to ensure correct task progression between the arms.
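The three subtask types above can be viewed as a small scheduling problem: parallel subtasks run asynchronously per arm, coordination subtasks force both arms to act together, and sequential subtasks impose ordering constraints. A hypothetical sketch of such a scheduler (the `Subtask` type and task names are illustrative, not DexMimicGen's actual interface):

```python
# Illustrative scheduler for the three subtask types; names are hypothetical.
from dataclasses import dataclass, field
from enum import Enum, auto

class SubtaskType(Enum):
    PARALLEL = auto()      # each arm progresses independently
    COORDINATION = auto()  # both arms must execute in lockstep
    SEQUENTIAL = auto()    # gated behind other subtasks finishing

@dataclass
class Subtask:
    name: str
    type: SubtaskType
    depends_on: list = field(default_factory=list)  # ordering constraints

def ready(task, done):
    """A subtask may start once all its ordering constraints are satisfied."""
    return all(dep in done for dep in task.depends_on)

# Example plan: lift a lid with both hands (coordination), place objects with
# each arm independently (parallel), then close the box only after both
# placements finish (sequential).
tasks = [
    Subtask("lift_lid", SubtaskType.COORDINATION),
    Subtask("place_left", SubtaskType.PARALLEL, ["lift_lid"]),
    Subtask("place_right", SubtaskType.PARALLEL, ["lift_lid"]),
    Subtask("close_box", SubtaskType.SEQUENTIAL, ["place_left", "place_right"]),
]

done, order = set(), []
while len(done) < len(tasks):
    for t in tasks:
        if t.name not in done and ready(t, done):
            done.add(t.name)
            order.append(t.name)
print(order)  # lift_lid first, close_box last
```

During generation, coordination subtasks additionally require both arms' retargeted segments to preserve their relative pose, while parallel subtasks let each arm's segment be transformed and time-warped independently.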

Implications and Future Directions

The implications of DexMimicGen are significant for both practical applications and theoretical research in robotic manipulation. Practically, the method reduces the need for extensive human data collection, thereby lowering costs and entry barriers for training complex humanoid robots. Theoretically, DexMimicGen provides a framework that could enhance understanding of scalable data-generation techniques and multi-agent coordination.

The DexMimicGen dataset and developed environments also open avenues for future research in imitation learning and robotic control. Evaluating how policy architecture choices affect learning outcomes, as shown in the analysis section, highlights areas for further investigation in optimizing learning strategies. Additionally, real-world trials affirm the potential of transferring simulation-trained models to tangible applications, supporting more nuanced development of robotic capabilities in dynamic settings.

Overall, DexMimicGen both boosts the efficiency of data acquisition for robot learning and provides a robust framework for future advances in dexterous bimanual manipulation, contributing to both applied and theoretical robotics research.