Conditional Variational Auto Encoder Based Dynamic Motion for Multi-task Imitation Learning (2405.15266v1)

Published 24 May 2024 in cs.RO

Abstract: The dynamic movement primitive (DMP) method is an effective approach to learning from demonstrations. However, most current DMP-based methods learn only one task per module. Although some deep learning-based frameworks can learn multiple tasks simultaneously, they require large amounts of training data and generalize poorly to untrained states. In this paper, we propose a framework that combines the advantages of the traditional DMP-based method and the conditional variational autoencoder (CVAE). The encoder and decoder are composed of a dynamic system and deep neural networks. The neural networks generate a torque conditioned on the task ID, and this torque drives the dynamic system to produce the desired trajectory toward the final state. In this way, the generated trajectory can adapt to a new goal position. We also propose a fine-tuning method to guarantee via-point constraints. Our model is trained on a handwritten-digit dataset and can be applied directly to robotic reaching and pushing tasks. The proposed model is validated in a simulation environment; after training on the handwritten-digit dataset, it achieves a 100% success rate on the pushing and reaching tasks.
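The goal adaptation described in the abstract rests on the standard DMP transformation system of Ijspeert et al. [1]: a learned forcing term shapes the transient, while spring-damper attractor dynamics pull the trajectory to whatever goal is supplied. A minimal 1-D sketch (illustrative gains and a toy forcing term, not the paper's CVAE model):

```python
import numpy as np

def dmp_rollout(goal, y0=0.0, forcing=None, tau=1.0, dt=0.001, T=1.0,
                alpha_z=25.0, beta_z=25.0 / 4.0, alpha_x=4.0):
    """Integrate a 1-D dynamic movement primitive (DMP) toward `goal`.

    Transformation system:
        tau * dz = alpha_z * (beta_z * (goal - y) - z) + f(x)
        tau * dy = z
    driven by a phase variable x from the canonical system:
        tau * dx = -alpha_x * x
    Because the forcing term f depends only on the phase x, the same
    learned f can be reused with a different goal.
    """
    n = int(T / dt)
    y, z, x = y0, 0.0, 1.0
    traj = np.empty(n)
    for i in range(n):
        f = forcing(x) if forcing is not None else 0.0
        dz = (alpha_z * (beta_z * (goal - y) - z) + f) / tau
        dy = z / tau
        dx = -alpha_x * x / tau
        z += dz * dt
        y += dy * dt
        x += dx * dt
        traj[i] = y
    return traj

# The same forcing term adapts to two different goals: each rollout
# converges to the goal it was given, with the forcing shaping the transient.
traj_a = dmp_rollout(goal=1.0, forcing=lambda x: 10.0 * x)
traj_b = dmp_rollout(goal=-0.5, forcing=lambda x: 10.0 * x)
```

The gains follow the common critically damped choice (beta_z = alpha_z / 4); in the paper, the forcing torque is produced by a task-conditioned neural network decoder rather than the fixed lambda used here.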

References (14)
  1. A. J. Ijspeert, J. Nakanishi, H. Hoffmann, P. Pastor, and S. Schaal, “Dynamical movement primitives: Learning attractor models for motor behaviors,” Neural Computation, vol. 25, no. 2, pp. 328–373, 2013.
  2. H. Nguyen and H. La, “Review of deep reinforcement learning for robot manipulation,” in 2019 Third IEEE International Conference on Robotic Computing (IRC), pp. 590–595, 2019.
  3. V. G. Goecks, G. M. Gremillion, V. J. Lawhern, J. Valasek, and N. R. Waytowich, “Integrating behavior cloning and reinforcement learning for improved performance in dense and sparse reward environments,” 2020.
  4. T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P. Abbeel, “Deep imitation learning for complex manipulation tasks from virtual reality teleoperation,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 5628–5635, 2018.
  5. A. Paraschos, C. Daniel, J. R. Peters, and G. Neumann, “Probabilistic movement primitives,” in Advances in Neural Information Processing Systems (C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Weinberger, eds.), vol. 26, Curran Associates, Inc., 2013.
  6. F. Frank, A. Paraschos, P. van der Smagt, and B. Cseke, “Constrained probabilistic movement primitives for robot trajectory adaptation,” IEEE Transactions on Robotics, vol. 38, pp. 2276–2294, Aug. 2022.
  7. M. Przystupa, F. Haghverd, M. Jagersand, and S. Tosatto, “Deep probabilistic movement primitives with a bayesian aggregator,” 2023.
  8. N. Chen, M. Karl, and P. van der Smagt, “Dynamic movement primitives in latent space of time-dependent variational autoencoders,” in 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pp. 629–636, 2016.
  9. N. Chen, J. Bayer, S. Urban, and P. van der Smagt, “Efficient movement representation by embedding dynamic movement primitives in deep autoencoders,” in 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), pp. 434–440, 2015.
  10. M. Noseworthy, R. Paul, S. Roy, D. Park, and N. Roy, “Task-conditioned variational autoencoders for learning movement primitives,” in Proceedings of the Conference on Robot Learning (L. P. Kaelbling, D. Kragic, and K. Sugiura, eds.), vol. 100 of Proceedings of Machine Learning Research, pp. 933–944, PMLR, 30 Oct–01 Nov 2020.
  11. D. P. Kingma and M. Welling, “An introduction to variational autoencoders,” Foundations and Trends® in Machine Learning, vol. 12, no. 4, pp. 307–392, 2019.
  12. C. Doersch, “Tutorial on variational autoencoders,” 2021.
  13. E. Coumans and Y. Bai, “PyBullet, a Python module for physics simulation for games, robotics and machine learning,” http://pybullet.org, 2016–2021.
  14. R. de Lazcano, K. Andreas, J. J. Tai, S. R. Lee, and J. Terry, “Gymnasium robotics,” 2023.
