
Multimodal Fusion of EMG and Vision for Human Grasp Intent Inference in Prosthetic Hand Control (2104.03893v5)

Published 8 Apr 2021 in cs.RO, cs.AI, cs.CV, cs.HC, and eess.SP

Abstract:

Objective: For transradial amputees, robotic prosthetic hands promise to restore the capability to perform activities of daily living. Current control methods based on physiological signals such as electromyography (EMG) are prone to poor inference outcomes due to motion artifacts, muscle fatigue, and other confounds. Vision sensors are a major source of information about the environment state and can play a vital role in inferring feasible and intended gestures. However, visual evidence is susceptible to its own artifacts, most often object occlusion and lighting changes. Multimodal evidence fusion using physiological and vision sensor measurements is a natural approach given the complementary strengths of these modalities.

Methods: In this paper, we present a Bayesian evidence fusion framework for grasp intent inference using eye-view video, eye-gaze, and forearm EMG, each processed by neural network models. We analyze individual and fused performance as a function of time as the hand approaches the object to grasp it. For this purpose, we also developed novel data processing and augmentation techniques to train the neural network components.

Results: Our results indicate that, on average, fusion improves the instantaneous upcoming grasp type classification accuracy during the reaching phase by 13.66% and 14.8% relative to EMG (81.64% non-fused) and visual evidence (80.5% non-fused) respectively, resulting in an overall fusion accuracy of 95.3%.

Conclusion: Our experimental data analyses demonstrate that EMG and visual evidence have complementary strengths, and as a consequence, fusion of multimodal evidence can outperform each individual evidence modality at any given time.
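The Bayesian evidence fusion described above can be illustrated with a minimal sketch: assuming each modality's classifier outputs a per-class probability vector over the same set of grasp types, and assuming conditional independence of the modalities given the grasp class, the fused posterior is proportional to the product of the modality evidences and a class prior. The function and variable names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fuse_posteriors(p_emg, p_vision, prior=None):
    """Fuse per-class grasp probabilities from EMG and vision classifiers.

    Under a conditional-independence assumption, the fused posterior is
    proportional to the product of the two modality evidences and the prior.
    """
    p_emg = np.asarray(p_emg, dtype=float)
    p_vision = np.asarray(p_vision, dtype=float)
    if prior is None:
        # Assume a uniform prior over grasp types when none is given.
        prior = np.full_like(p_emg, 1.0 / p_emg.size)
    fused = p_emg * p_vision * prior
    return fused / fused.sum()  # renormalize to a probability distribution

# Example: vision is ambiguous between classes 0 and 1, EMG favors class 1,
# so the fused posterior concentrates on class 1.
p_vision = [0.45, 0.45, 0.10]
p_emg = [0.20, 0.70, 0.10]
fused = fuse_posteriors(p_emg, p_vision)
```

In this toy example the fused distribution resolves the vision classifier's ambiguity in favor of class 1, mirroring the complementary-strengths argument in the abstract: whichever modality is more confident at a given instant dominates the product.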
