MRNaB: Mixed Reality-based Robot Navigation Interface using Optical-see-through MR-beacon (2403.19310v1)

Published 28 Mar 2024 in cs.RO

Abstract: Recent advancements in robotics have led to the development of numerous interfaces intended to make robot navigation more intuitive. However, reliance on traditional 2D displays limits how much information can be visualized simultaneously. Mixed Reality (MR) technology addresses this issue by increasing the dimensionality of information visualization, allowing users to perceive multiple pieces of information concurrently. This paper proposes the Mixed Reality-based robot navigation interface using an optical-see-through MR-beacon (MRNaB), a novel approach that places an MR-beacon on top of the real-world environment to function as a signal transmitter for robot navigation. The MR-beacon is designed to be persistent, eliminating the need to re-enter navigation inputs for the same location. The system is built around four primary functions: "Add", "Move", "Delete", and "Select", which respectively allow users to add an MR-beacon, move it, delete it, and select it as a navigation target. The effectiveness of the proposed method was validated through experiments comparing it with a traditional 2D system; MRNaB was shown to improve user performance in navigation tasks both subjectively and objectively. For additional material, please check: https://mertcookimg.github.io/mrnab

References (37)
  1. M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Ng, “ROS: an open-source Robot Operating System,” vol. 3, 2009.
  2. A.-T. Ngo, N.-H. Tran, T.-P. Ton, H. Nguyen, and T.-P. Tran, “Simulation of hybrid autonomous underwater vehicle based on ros and gazebo,” in 2021 International Conference on Advanced Technologies for Communications (ATC), 2021, pp. 109–113.
  3. A. B. Cruz, A. Sousa, and L. P. Reis, “Controller for real and simulated wheelchair with a multimodal interface using gazebo and ros,” in 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 2020, pp. 164–169.
  4. G. Beraldo, N. Castaman, R. Bortoletto, E. Pagello, J. del R. Millán, L. Tonin, and E. Menegatti, “Ros-health: An open-source framework for neurorobotics,” in 2018 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), 2018, pp. 174–179.
  5. S. S. Velamala, D. Patil, and X. Ming, “Development of ros-based gui for control of an autonomous surface vehicle,” in 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2017, pp. 628–633.
  6. K. C. Hoang, W. P. Chan, S. Lay, A. Cosgun, and E. A. Croft, “ARviz: An Augmented Reality-Enabled Visualization Platform for ROS Applications,” IEEE Robotics & Automation Magazine, vol. 29, no. 1, pp. 58–67, 2022.
  7. M. Salvato, N. Heravi, A. M. Okamura, and J. Bohg, “Predicting hand-object interaction for improved haptic feedback in mixed reality,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 3851–3857, 2022.
  8. A. Devo, J. Mao, G. Costante, and G. Loianno, “Autonomous single-image drone exploration with deep reinforcement learning and mixed reality,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 5031–5038, 2022.
  9. Z. Makhataeva and H. A. Varol, “Augmented reality for robotics: A review,” Robotics, vol. 9, no. 2, 2020.
  10. W. Fan, X. Guo, E. Feng, J. Lin, Y. Wang, J. Liang, M. Garrad, J. Rossiter, Z. Zhang, N. Lepora, L. Wei, and D. Zhang, “Digital twin-driven mixed reality framework for immersive teleoperation with haptic rendering,” IEEE Robotics and Automation Letters, vol. 8, no. 12, pp. 8494–8501, 2023.
  11. L. Penco, K. Momose, S. McCrory, D. Anderson, N. Kitchel, D. Calvert, and R. J. Griffin, “Mixed reality teleoperation assistance for direct control of humanoids,” IEEE Robotics and Automation Letters, vol. 9, no. 2, pp. 1937–1944, 2024.
  12. C. Zhang, C. Lin, Y. Leng, Z. Fu, Y. Cheng, and C. Fu, “An effective head-based hri for 6d robotic grasping using mixed reality,” IEEE Robotics and Automation Letters, vol. 8, no. 5, pp. 2796–2803, 2023.
  13. J. Lee, T. Lim, and W. Kim, “Investigating the Usability of Collaborative Robot Control Through Hands-Free Operation Using Eye Gaze and Augmented Reality,” in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2023, pp. 4101–4106.
  14. J. Chen, B. Sun, M. Pollefeys, and H. Blum, “A 3D Mixed Reality Interface for Human-Robot Teaming,” arXiv preprint arXiv:2310.02392, 2023, available: arXiv:2310.02392 [cs.RO].
  15. G. Zhang, D. Zhang, L. Duan, and G. Han, “Accessible Robot Control in Mixed Reality,” arXiv preprint arXiv:2306.02393, 2023, available: arXiv:2306.02393 [cs.RO].
  16. N. Koenig and A. Howard, “Design and use paradigms for Gazebo, an open-source multi-robot simulator,” in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, 2004, pp. 2149–2154 vol.3.
  17. J. S. Cepeda, L. Chaimowicz, and R. Soto, “Exploring Microsoft Robotics Studio as a Mechanism for Service-Oriented Robotics,” in Proc. of the Latin American Robotics Symposium and Intelligent Robotics Meeting, 2010, pp. 7–12.
  18. O. Michel, “WebotsTM: Professional Mobile Robot Simulation,” International Journal of Advanced Robotic Systems, vol. 1, 2004.
  19. D. D. Rajapaksha, M. N. Mohamed Nuhuman, S. D. Gunawardhana, A. Sivalingam, M. N. Mohamed Hassan, S. Rajapaksha, and C. Jayawardena, “Web Based User-Friendly Graphical Interface to Control Robots with ROS Environment,” in Proc. of the International Conference on Information Technology Research, 2021, pp. 1–6.
  20. W. A. M. Fernando, C. Jayawardena, and U. U. S. Rajapaksha, “Developing A User-Friendly Interface from Robotic Applications Development,” in Proc. of the International Research Conference on Smart Computing and Systems Engineering, vol. 5, 2022, pp. 196–204.
  21. M. Aarizou, “ROS-based web application for an optimized multi-robots multi-users manipulation,” in Proc. of the National Conference in Computer Science Research and its Applications, 2023.
  22. H. Kam, S.-H. Lee, T. Park, and C.-H. Kim, “RViz: a toolkit for real domain data visualization,” Telecommunication Systems, vol. 60, pp. 1–9, 2015.
  23. I. Tiddi, E. Bastianelli, G. Bardaro, and E. Motta, “A User-friendly Interface to Control ROS Robotic Platforms,” in Proc. of the International Workshop on the Semantic Web, 2018.
  24. M. Gu, A. Cosgun, W. P. Chan, T. Drummond, and E. Croft, “Seeing Thru Walls: Visualizing Mobile Robots in Augmented Reality,” in Proc. of the IEEE International Conference on Robot & Human Interactive Communication.   IEEE, 2021.
  25. M. Walker, H. Hedayati, J. Lee, and D. Szafir, “Communicating Robot Motion Intent with Augmented Reality,” in Proc. of the ACM/IEEE International Conference on Human-Robot Interaction, 2018, pp. 316–324.
  26. K. Owaki, N. Techasarntikul, and H. Shimonishi, “Human Behavior Analysis in Human-Robot Cooperation with AR Glasses,” in Proc. of the IEEE International Symposium on Mixed and Augmented Reality.   Los Alamitos, CA, USA: IEEE Computer Society, 2023, pp. 20–28.
  27. M. Zolotas and Y. Demiris, “Towards Explainable Shared Control using Augmented Reality,” in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2019, pp. 3020–3026.
  28. M. Zolotas, J. Elsdon, and Y. Demiris, “Head-Mounted Augmented Reality for Explainable Robotic Wheelchair Assistance,” in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 1823–1829.
  29. F. Regal, C. Petlowany, C. Pehlivanturk, C. Van Sice, C. Suarez, B. Anderson, and M. Pryor, “AugRE: Augmented Robot Environment to Facilitate Human-Robot Teaming and Communication,” in Proc. of the IEEE International Conference on Robot and Human Interactive Communication, 2022, pp. 800–805.
  30. T. Kot, P. Novák, and J. Bajak, “Using HoloLens to create a virtual operator station for mobile robots,” in Proc. of the International Carpathian Control Conference, 2018, pp. 422–427.
  31. A. Angelopoulos, A. Hale, H. Shaik, A. Paruchuri, K. Liu, R. Tuggle, and D. Szafir, “Drone Brush: Mixed Reality Drone Path Planning,” in Proc. of the ACM/IEEE International Conference on Human-Robot Interaction, 2022, pp. 678–682.
  32. R. C. Quesada and Y. Demiris, “Holo-SpoK: Affordance-Aware Augmented Reality Control of Legged Manipulators,” in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022, pp. 856–862.
  33. C. Cruz, J. Cerro, and A. Barrientos, “Mixed-reality for quadruped-robotic guidance in SAR tasks,” Journal of Computational Design and Engineeringy, vol. 10, 2023.
  34. M. Wu, S.-L. Dai, and C. Yang, “Mixed Reality Enhanced User Interactive Path Planning for Omnidirectional Mobile Robot,” Applied Sciences, vol. 10, no. 3, p. 1135, 2020.
  35. J. Brooke, “SUS: A quick and dirty usability scale,” Usability Eval. Ind., vol. 189, 1995.
  36. S. S. Shapiro and M. B. Wilk, “An Analysis of Variance Test for Normality (Complete Samples),” Biometrika, vol. 52, no. 3/4, pp. 591–611, 1965.
  37. D. Rey and M. Neuhäuser, “Wilcoxon-Signed-Rank Test,” International Encyclopedia of Statistical Science, pp. 1658–1659, 2011.
Summary

  • The paper introduces a novel MR-based robot navigation system that uses persistent optical MR-beacons to minimize repetitive operator inputs.
  • It employs a gesture-controlled interface on the HoloLens 2 with the functions Add, Move, Delete, and Select for beacon management.
  • Experimental comparisons demonstrate that MRNaB reduces command actions and enhances navigation efficiency over traditional 2D interfaces.

MRNaB: A Mixed Reality Interface for Intuitive Robot Navigation

Introduction to MRNaB

Recent advancements in robotics have underscored the importance of interfaces that let operators command robot navigation intuitively. Traditional approaches rely heavily on 2D displays, which force operators to split their attention between the display and the robot's physical surroundings. The system proposed in this research, the Mixed Reality-based robot navigation interface using an optical-see-through MR-beacon (MRNaB), leverages Mixed Reality (MR) technology to make navigation tasks more intuitive. MRNaB introduces a persistent MR-beacon placed on top of the real-world environment that functions as a signal transmitter for robot navigation. The system is designed to improve navigation efficiency by making the interface more visually intuitive and by reducing the need for repetitive navigation inputs.

MRNaB's Design and Functionality

MRNaB centers on user interaction through an MR interface running on the HoloLens 2 and supports four basic operations on navigation beacons: "Add", "Move", "Delete", and "Select". These operations let users manipulate the navigation system directly in mixed reality, improving the effectiveness of robot navigation compared with traditional 2D systems. A key advantage of MRNaB is the persistence of MR-beacons, which removes the need for operators to re-enter navigation points for frequently visited destinations. This is particularly beneficial in settings with repetitive navigation tasks, such as delivery in a home or office.
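
To make the four operations concrete, the sketch below models a minimal beacon registry. This is an illustration only, not the authors' code; the class and method names (Beacon, BeaconRegistry) are hypothetical.

```python
# Hypothetical sketch of the bookkeeping behind the four beacon operations.
from dataclasses import dataclass
from typing import Dict, Optional, Tuple


@dataclass
class Beacon:
    """A persistent navigation marker at a planar world position (x, y) with a heading."""
    beacon_id: int
    position: Tuple[float, float]
    yaw: float = 0.0


class BeaconRegistry:
    """Keeps beacons alive across navigation requests (the "persistent" MR-beacons)."""

    def __init__(self) -> None:
        self._beacons: Dict[int, Beacon] = {}
        self._next_id = 0
        self._selected: Optional[int] = None

    def add(self, position: Tuple[float, float], yaw: float = 0.0) -> int:
        """'Add': place a new beacon and return its id."""
        beacon_id = self._next_id
        self._next_id += 1
        self._beacons[beacon_id] = Beacon(beacon_id, position, yaw)
        return beacon_id

    def move(self, beacon_id: int, position: Tuple[float, float]) -> None:
        """'Move': relocate an existing beacon."""
        self._beacons[beacon_id].position = position

    def delete(self, beacon_id: int) -> None:
        """'Delete': remove a beacon; clear the selection if it pointed here."""
        self._beacons.pop(beacon_id, None)
        if self._selected == beacon_id:
            self._selected = None

    def select(self, beacon_id: int) -> Beacon:
        """'Select': mark a beacon as the current navigation target."""
        self._selected = beacon_id
        return self._beacons[beacon_id]
```

In MRNaB the beacons are additionally rendered as optical-see-through MR objects anchored in place; the sketch captures only the bookkeeping side of Add, Move, Delete, and Select.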

System Implementation and Experiments

MRNaB uses ROS 2 as the robot interface and is operated through hand gestures on the HoloLens 2. A central aspect of the implementation is the use of MR technology to co-localize the robot and the interface in physical space, which improves spatial understanding and operational efficiency. Comparative experiments against a traditional 2D system showed that MRNaB enables more effective robot navigation, with fewer command actions required and more efficient arrival at target locations.
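
The summary does not include implementation code, but a plausible way to hand a selected beacon to the robot in ROS 2 is to send its pose as a Nav2 NavigateToPose goal. The sketch below assumes that pattern; the node name and frame choice are assumptions, and the HoloLens-to-map co-localization is assumed to have already expressed the beacon pose in the robot's map frame.

```python
# Illustrative sketch only: forwarding a selected beacon's pose to Nav2's
# NavigateToPose action via rclpy. The co-localization transform is not shown.
import math

import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped
from nav2_msgs.action import NavigateToPose


class BeaconGoalSender(Node):
    """Sends the pose of the currently selected beacon as a navigation goal."""

    def __init__(self) -> None:
        super().__init__('beacon_goal_sender')  # hypothetical node name
        self._client = ActionClient(self, NavigateToPose, 'navigate_to_pose')

    def send_goal(self, x: float, y: float, yaw: float) -> None:
        goal = NavigateToPose.Goal()
        goal.pose = PoseStamped()
        goal.pose.header.frame_id = 'map'  # beacon pose assumed already in the map frame
        goal.pose.header.stamp = self.get_clock().now().to_msg()
        goal.pose.pose.position.x = x
        goal.pose.pose.position.y = y
        # Planar robot: encode the desired heading as a yaw-only quaternion.
        goal.pose.pose.orientation.z = math.sin(yaw / 2.0)
        goal.pose.pose.orientation.w = math.cos(yaw / 2.0)

        self._client.wait_for_server()
        self._client.send_goal_async(goal)  # fire-and-forget for brevity


def main() -> None:
    rclpy.init()
    node = BeaconGoalSender()
    node.send_goal(1.5, 0.8, math.pi / 2)  # example beacon coordinates
    rclpy.spin_once(node, timeout_sec=1.0)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```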

Implications and Future Directions

The research on MRNaB opens new avenues for the development of intuitive interfaces in robot navigation, emphasizing the potential of MR technology in enhancing human-robot interaction. The findings suggest that MRNaB not only improves operational efficiency but also enhances the user experience in robot navigation tasks. Future research could explore the integration of more advanced MR capabilities and the application of MRNaB in more complex navigation scenarios, potentially expanding its use to various industrial and commercial settings.

Conclusion

MRNaB represents a significant advancement in the field of robot navigation interfaces by integrating MR technology to improve both the intuitiveness and efficiency of navigation tasks. By allowing operators to interact with a persistent, three-dimensional navigation beacon in a mixed reality environment, MRNaB offers a more natural and effective approach to robot navigation than traditional 2D interfaces. The system's ability to reduce the cognitive load on operators and streamline navigation processes holds promise for broader applications in robotics and automation.
