
From RGB images to Dynamic Movement Primitives for planar tasks (2303.03204v3)

Published 6 Mar 2023 in cs.RO

Abstract: Dynamic Movement Primitives (DMPs) have been extensively applied to various robotic tasks thanks to their generalization and robustness properties. However, successful execution of a given task may require different motion patterns that account not only for the initial and target positions but also for features of the overall structure and layout of the scene. To make DMPs applicable to a wider range of tasks and to further automate their use, we design a framework that combines deep residual networks with DMPs and can encapsulate the different motion patterns of a planar task, provided through human demonstrations on the RGB image plane. From new raw RGB visual input, we can then automatically infer the appropriate DMP parameters, i.e., the weights that determine the motion pattern and the initial/target positions. We compare our method against another state-of-the-art method for inferring DMPs from images and carry out experimental validation on two different planar tasks.
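The DMP parameters the abstract refers to (forcing-term weights plus initial and target positions) drive a second-order attractor system. A minimal sketch of a 1-D discrete DMP rollout, assuming the standard Ijspeert-style formulation with Gaussian basis functions and Euler integration; the gain values and the function name `dmp_rollout` are illustrative choices, not taken from the paper:

```python
import numpy as np

def dmp_rollout(w, y0, g, tau=1.0, dt=0.01,
                alpha_z=25.0, beta_z=25.0 / 4, alpha_x=1.0):
    """Integrate a 1-D discrete DMP whose forcing term is a
    weighted mixture of Gaussian basis functions (weights `w`)."""
    n = len(w)
    # Basis centres spaced along the decaying phase variable,
    # with widths tied to the spacing between neighbouring centres.
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n))
    h = 1.0 / np.diff(c) ** 2
    h = np.append(h, h[-1])

    x, y, z = 1.0, y0, 0.0   # phase, position, scaled velocity
    traj = [y]
    for _ in range(int(tau / dt)):
        psi = np.exp(-h * (x - c) ** 2)
        # Forcing term: vanishes as the phase x decays, scaled by (g - y0).
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)
        # Critically damped spring-damper toward the goal g, plus forcing.
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)
        traj.append(y)
    return np.array(traj)
```

With zero weights the forcing term is zero and the rollout reduces to a critically damped spring that converges to the goal; nonzero weights (e.g. those regressed by a network from an image) shape the transient into a demonstrated motion pattern. A 2-D planar trajectory would use one such DMP per coordinate sharing a single phase variable.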

