Advancements in Tactile Hand Gesture Recognition for Enhanced Human-Machine Interaction (2405.17038v1)

Published 27 May 2024 in cs.HC and cs.AI

Abstract: Motivated by the growing interest in enhancing intuitive physical Human-Machine Interaction (HRI/HVI), this study aims to propose a robust tactile hand gesture recognition system. We performed a comprehensive evaluation of different hand gesture recognition approaches for a large area tactile sensing interface (touch interface) constructed from conductive textiles. Our evaluation encompassed traditional feature engineering methods, as well as contemporary deep learning techniques capable of real-time interpretation of a range of hand gestures, accommodating variations in hand sizes, movement velocities, applied pressure levels, and interaction points. Our extensive analysis of the various methods makes a significant contribution to tactile-based gesture recognition in the field of human-machine interaction.
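
The paper's code is not reproduced here, but as a rough illustration of the kind of deep-learning pipeline the abstract describes (real-time classification of gestures from a large-area tactile sensing surface), below is a minimal sketch of a classifier over sequences of tactile pressure frames. This is not the authors' implementation: the CNN+LSTM architecture, the `TactileGestureNet` name, the 16x16 taxel grid, and the eight-gesture output are all hypothetical assumptions chosen for illustration.

```python
# Minimal illustrative sketch (not the paper's method): a small CNN encodes
# each tactile pressure frame, and an LSTM models the gesture over time.
# All shapes, names, and the gesture count are assumptions for illustration.
import torch
import torch.nn as nn

class TactileGestureNet(nn.Module):
    def __init__(self, num_gestures: int = 8, hidden: int = 64):
        super().__init__()
        # Per-frame spatial encoder for a single-channel pressure map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),  # fixed 4x4 output regardless of grid size
            nn.Flatten(),             # -> 32 * 4 * 4 = 512 features per frame
        )
        # Temporal model over the sequence of per-frame embeddings.
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_gestures)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, height, width) of raw pressure readings.
        b, t, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, 1, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])  # gesture logits

# Example: a batch of 2 gesture clips, 30 frames each, on a 16x16 taxel grid.
model = TactileGestureNet()
logits = model(torch.randn(2, 30, 16, 16))
print(logits.shape)  # torch.Size([2, 8])
```

The adaptive pooling step is one plausible way such a model could be made independent of the sensor's taxel-grid resolution, while the recurrent layer accepts variable-length frame sequences, which loosely corresponds to the variations in movement velocity and interaction point that the abstract says the system must accommodate.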
