MRReP: Drawing Robot Paths in Thin Air

This presentation explores MRReP, a mixed reality interface that rethinks how humans specify navigation paths for mobile robots. By enabling users to draw paths directly on the physical floor using hand gestures through a HoloLens headset, MRReP closes the cognitive gap inherent in traditional 2D map-based interfaces. The system integrates with ROS 2 navigation, allowing robots to follow user-drawn paths with high fidelity. Experimental results demonstrate substantial improvements in path accuracy, usability, and reproducibility compared to conventional GUI approaches, suggesting a fundamental shift in human-robot spatial communication.
Script
When you tell a robot where to go using a map on a screen, you're asking your brain to perform an invisible translation between what you see in the environment and a two-dimensional grid of pixels. That mental gap is where errors creep in, where your spatial intentions get lost in coordinate systems and mouse clicks.
MRReP collapses that gap entirely. Wearing a HoloLens headset, you extend your hand, pinch your fingers, and draw the exact path you want the robot to take, right there on the actual floor. The system captures those waypoints, transmits them to the robot's navigation stack, and the robot follows your hand-drawn trajectory as its global path.
The elegance lies in how this spatial input flows from your gesture into autonomous execution.
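To make that flow concrete, here is a minimal sketch of one plausible intermediate step: turning the raw pinch waypoints into evenly spaced poses with headings, the shape a global path typically takes. The function name, spacing parameter, and (x, y, yaw) representation are illustrative assumptions, not MRReP's actual implementation.

```python
import math

def resample_path(waypoints, spacing=0.05):
    """Resample a hand-drawn polyline into evenly spaced (x, y, yaw)
    poses, the kind of sequence a global path is built from.

    waypoints: list of (x, y) floor points in the map frame (assumed input).
    spacing:   target distance between consecutive poses, in meters.
    """
    if len(waypoints) < 2:
        raise ValueError("need at least two waypoints")

    poses = []
    carry = 0.0  # distance already covered since the last emitted pose
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        seg_len = math.hypot(x1 - x0, y1 - y0)
        if seg_len == 0.0:
            continue  # skip duplicate points from jittery hand tracking
        yaw = math.atan2(y1 - y0, x1 - x0)
        d = spacing - carry  # arc length into this segment for the next pose
        if not poses:
            poses.append((x0, y0, yaw))  # always keep the very first point
            d = spacing
        while d <= seg_len:
            t = d / seg_len
            poses.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), yaw))
            d += spacing
        carry = seg_len - (d - spacing)  # leftover distance rolls into next segment
    return poses
```

In a ROS 2 system, poses like these would be stamped in the map frame and published as a `nav_msgs/Path` for the navigation stack to follow, though the exact message plumbing MRReP uses is not detailed here.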
The authors ran a controlled experiment with 16 participants drawing paths in two environments—one straight, one with multiple 45-degree turns. In the geometrically complex scenario, traditional GUI users achieved only 52.9% precision, with their paths frequently drifting outside the target region. MRReP users maintained 83.6% precision, and nearly 100% of their drawn paths stayed within the ground-truth corridor.
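The exact precision metric isn't spelled out here; one plausible reading, offered as a sketch rather than the authors' definition, is the fraction of drawn path points that fall within a lateral tolerance of the ground-truth polyline. The tolerance value and function names below are illustrative assumptions.

```python
import math

def point_to_segment_dist(p, a, b):
    """Shortest 2D distance from point p to the segment from a to b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_sq = dx * dx + dy * dy
    if seg_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_sq))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def path_precision(drawn, truth, tolerance=0.1):
    """Fraction of drawn (x, y) points lying within `tolerance` meters
    of the ground-truth polyline `truth` (a list of (x, y) vertices)."""
    inside = sum(
        1 for p in drawn
        if min(point_to_segment_dist(p, a, b)
               for a, b in zip(truth, truth[1:])) <= tolerance
    )
    return inside / len(drawn)
```

Under a metric like this, a path that drifts outside the corridor for part of its length scores exactly the fraction of points that stayed inside, which matches the intuition behind the 52.9% versus 83.6% comparison.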
This visualization reveals the dramatic difference in path stability. Each line represents one participant's result. The 2D interface produces a spray of trajectories with substantial endpoint deviation, while the mixed reality paths cluster tightly around the ground-truth tape. The robot's actual trajectories mirror this pattern—MRReP users don't just draw better paths, they produce paths the robot can execute more faithfully.
The numbers tell only part of the story. Users rated MRReP 47% higher on usability and reported significantly lower mental workload, even though drawing in mixed reality sometimes took longer. That apparent paradox dissolves when you consider that MR makes errors visible immediately—you see your path deviation in real-time, in the actual space. The 2D interface hides those mistakes until after the robot moves.
What MRReP demonstrates is that the shortest distance between human intent and robot action isn't through a screen—it's through the physical space you both inhabit. To explore more research at the frontier of human-robot interaction and create your own video presentations, visit EmergentMind.com.