
Multi-Modal Data Fusion in Enhancing Human-Machine Interaction for Robotic Applications: A Survey (2202.07732v3)

Published 15 Feb 2022 in cs.HC

Abstract: Human-machine interaction has existed for several decades, with new applications emerging every day. A major goal that remains to be achieved is designing interactions that resemble how humans interact with one another, which calls for interactive systems that support a more realistic and natural human-machine interaction. Developers and researchers therefore need to be aware of the state-of-the-art methodologies used to pursue this goal. This survey provides researchers with an overview of state-of-the-art data fusion technologies that combine multiple inputs to accomplish a task in the robotic application domain. Input data modalities are broadly classified into uni-modal and multi-modal systems, and their applications across many industries are reviewed, including health care, where such systems can help professionals examine patients using different modalities. Multi-modal systems are distinguished by the combination of inputs used as a single input, e.g., gestures, voice, sensors, and haptic feedback. These inputs may or may not be fused, which yields a further classification of multi-modal systems. The survey concludes with a summary of the technologies in use for multi-modal systems.

Authors (3)
  1. Tauheed Khan Mohd (5 papers)
  2. Nicole Nguyen (1 paper)
  3. Ahmad Y Javaid (12 papers)
Citations (15)
