
iRoCo: Intuitive Robot Control From Anywhere Using a Smartwatch (2403.07199v1)

Published 11 Mar 2024 in cs.RO

Abstract: This paper introduces iRoCo (intuitive Robot Control) - a framework for ubiquitous human-robot collaboration using a single smartwatch and smartphone. By integrating probabilistic differentiable filters, iRoCo optimizes a combination of precise robot control and unrestricted user movement from ubiquitous devices. We demonstrate and evaluate the effectiveness of iRoCo in practical teleoperation and drone piloting applications. Comparative analysis shows no significant difference between task performance with iRoCo and gold-standard control systems in teleoperation tasks. Additionally, iRoCo users complete drone piloting tasks 32% faster than with a traditional remote control and report less frustration in a subjective load index questionnaire. Our findings strongly suggest that iRoCo is a promising new approach for intuitive robot control through smartwatches and smartphones from anywhere, at any time. The code is available at www.github.com/wearable-motion-capture
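
The abstract's core technical ingredient is a probabilistic differentiable filter: a state estimator that fuses noisy wearable sensor readings into a pose estimate while remaining trainable end to end by gradient descent. The sketch below illustrates that general idea with a single Kalman predict/update cell whose noise covariances are learnable parameters. It is a minimal sketch under a linear-Gaussian assumption; the class name, dimensions, and toy usage are illustrative inventions, not the paper's actual implementation, whose filters are learned from data and considerably richer.

```python
# Minimal sketch of a differentiable Kalman filter cell (illustrative only;
# not the iRoCo implementation). Learnable, log-parameterized noise
# covariances make every step differentiable, so the filter can be trained
# end to end alongside the rest of a model.
import torch
import torch.nn as nn

class DifferentiableKalmanCell(nn.Module):
    """One predict/update step with learnable process and sensor noise."""
    def __init__(self, state_dim: int, obs_dim: int):
        super().__init__()
        # Linear transition and observation models (could be neural networks).
        self.A = nn.Parameter(torch.eye(state_dim))           # transition
        self.H = nn.Parameter(torch.eye(obs_dim, state_dim))  # observation
        # Log-parameterization keeps the diagonal covariances positive.
        self.log_q = nn.Parameter(torch.zeros(state_dim))     # process noise
        self.log_r = nn.Parameter(torch.zeros(obs_dim))       # sensor noise

    def forward(self, mu, P, z):
        Q = torch.diag(self.log_q.exp())
        R = torch.diag(self.log_r.exp())
        # Predict step: propagate mean and covariance through the model.
        mu_pred = self.A @ mu
        P_pred = self.A @ P @ self.A.T + Q
        # Update step: correct the prediction with measurement z.
        S = self.H @ P_pred @ self.H.T + R
        K = P_pred @ self.H.T @ torch.linalg.inv(S)           # Kalman gain
        mu_new = mu_pred + K @ (z - self.H @ mu_pred)
        P_new = (torch.eye(mu.shape[0]) - K @ self.H) @ P_pred
        return mu_new, P_new

# Toy usage: one filtering step on a stand-in smartwatch reading. Gradients
# flow through the whole update, so the noise parameters are trainable.
cell = DifferentiableKalmanCell(state_dim=3, obs_dim=3)
mu, P = torch.zeros(3), torch.eye(3)
z = torch.randn(3)              # placeholder sensor measurement
mu, P = cell(mu, P, z)
loss = mu.pow(2).sum()          # placeholder training objective
loss.backward()                 # gradients reach log_q, log_r, A, H
```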
