Motion Prediction with Gaussian Processes for Safe Human-Robot Interaction in Virtual Environments (2405.09109v2)

Published 15 May 2024 in cs.RO, cs.AI, and cs.LG

Abstract: Humans use collaborative robots as tools for accomplishing various tasks, and the interaction between humans and robots happens in tight shared workspaces. These machines must be safe to operate alongside humans to minimize the risk of accidental collisions. Ensuring safety imposes many constraints, such as reduced torque and velocity limits during operation, which increase the time needed to accomplish many tasks. For applications such as using collaborative robots as haptic interfaces with intermittent contacts in virtual reality, these speed limitations result in poor user experiences. This research aims to improve the efficiency of a collaborative robot while improving the safety of the human user. We used Gaussian process models to predict human hand motion and developed intention-detection strategies based on hand motion and gaze, with the goal of reducing robot task time and improving human safety in a virtual environment. We then studied the effect of prediction on these metrics. Comparative results show that the prediction models improved the robot time by 3% and safety by 17%. When used alongside gaze, prediction with Gaussian process models improved the robot time by 2% and safety by 13%.
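The paper's exact pipeline is not reproduced here, but Gaussian process regression for short-horizon hand-motion prediction can be sketched as below. All data, kernel hyperparameters, and the one-dimensional setup are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.2, variance=1.0):
    # Squared-exponential (RBF) kernel between two sets of 1-D inputs.
    # length_scale and variance are assumed hyperparameters.
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(t_train, y_train, t_test, noise=1e-4):
    # Standard GP regression: posterior mean and variance at t_test,
    # computed via a Cholesky factorization of the training covariance.
    K = rbf_kernel(t_train, t_train) + noise * np.eye(len(t_train))
    Ks = rbf_kernel(t_test, t_train)
    Kss = rbf_kernel(t_test, t_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)
    return mean, var

# Illustrative 1-D hand coordinate observed over 0.5 s at ~20 Hz.
t = np.linspace(0.0, 0.5, 11)
y = 0.3 * np.sin(4.0 * t)          # synthetic trajectory, not real data
t_future = np.array([0.55, 0.60])  # short-horizon prediction targets
mean, var = gp_predict(t, y, t_future)
```

The predictive variance is what makes GPs attractive for safety: a planner can keep the robot farther from regions where the predicted hand position is uncertain, and the variance grows as the prediction horizon extends past the observed data.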
