Mobirobot: Mobile Robotic Systems
- Mobirobot is a term for integrated mobile robotic systems combining embedded sensing, networking, actuation, and computation for autonomous and teleoperated operation.
- These systems span applications such as remote teleoperation, education, industrial inspection, and healthcare, employing modular hardware and sophisticated software frameworks.
- Recent research focuses on enhancing real-time control, sensor fusion, and communication protocols to improve autonomy and safe interaction in complex, dynamic environments.
Mobirobot refers broadly to mobile robotic systems that integrate embedded sensing, networking, actuation, and computation for dynamic interaction with remote or physical environments. The term has been applied to distributed teleoperation testbeds, low-cost educational robots, networked industrial and research platforms, healthcare assistants, and competition-grade autonomous systems. Architectures and technical specifications vary by application, but the unifying theme is modular, transportable mobile robots capable of remote and/or autonomous operation, enabled by diverse hardware and software stacks. Leading exemplars span platforms engineered for Internet-based teleoperation, open-source research and education, home care, competition, industrial inspection, and social human-robot interaction.
1. Historical Context and Platform Taxonomy
Mobirobot research emerged in the context of expanding network connectivity, robotics miniaturization, and rapid advances in embedded processing power. The term encompasses:
- Internet telerobots: Early Mobirobot systems (e.g., the Multi-Sensor Smart Robot, MSSR) demonstrated real-time teleoperation via cellular WAN connectivity, robust onboard sensing, and modular software for research on network-aware control and remote user interfaces (Duong et al., 2016).
- Open-source educational and industrial-grade robots: Subsequent platforms (e.g., SMARTmBOT, ROMR) emphasized affordability, modularity, and accessible software (ROS or custom stacks), aiming to democratize research and prototyping (Jo et al., 2022, Linus et al., 2022).
- Social, healthcare, and assistive robots: Variants such as the Mobirobot deployed for pediatric therapy, or the Moby standing support robot, focus on human interaction, safety, and real-world acceptability within clinical workflows (Dyck et al., 14 Jan 2026, Manríquez-Cisterna et al., 27 Aug 2025).
- Competition and agile robotics: Other Mobirobot examples mirror Small Size League (SSL) spec robots for AI/robotics competitions, leveraging high-mobility omni-drive platforms, real-time multi-robot coordination, and integrated sensor fusion (Pereira et al., 2022).
- Inspection and reconfigurable robots: Highly adaptive Mobirobot designs like MIRRAX enable navigation in extreme or confined environments thanks to kinematic reconfiguration and omnidirectional drive (Cheah et al., 2022).
- Commodity hardware and smartphones: Recent work exploits the sensing and communications of commercial Android smartphones as the core "brain" of a Mobirobot, extending capabilities to affordable, portable robot control and data logging (Najafabadi et al., 2024).
2. Mechanical and Sensor Architectures
Mobirobot implementations span a broad spectrum of hardware architectures, reflecting heterogeneity in actuation, chassis design, and sensing.
- Drive and Locomotion: Topologies include omnidirectional mecanum or omni-wheeled robots for holonomic or X-drive mobility (Ahamad et al., 2024, Pereira et al., 2022), differential-drive designs (SMARTmBOT, ROMR; Jo et al., 2022, Linus et al., 2022), articulated legged morphologies (MOBIUS; Schperberg et al., 3 Nov 2025), and specialty support structures such as the sit-to-stand column in Moby (Manríquez-Cisterna et al., 27 Aug 2025). Actuation is delivered by precision brushed DC or BLDC motors, sometimes through high-torque gearboxes; a minimal inverse-kinematics sketch for a mecanum base follows the table below.
- Sensing: Sensor payloads typically combine proprioceptive measurements (wheel odometry, joint encoders, IMUs) with exteroceptive feedback (ToF/LiDAR, sonar, vision, GPS), and their integration enables 2D/3D SLAM, obstacle detection, pose estimation, and environmental monitoring (Duong et al., 2016, Jo et al., 2022, Ahamad et al., 2024).
- Control and Networking Hardware: Platforms are equipped with onboard microcontrollers (e.g., Arduino, dsPIC, OpenCR), embedded PCs (e.g., Jetson Nano, Raspberry Pi, industrial PCs), and wireless modules (Wi-Fi, Bluetooth, cellular modems) providing interfaces for both local autonomy and remote human-in-the-loop interaction (Duong et al., 2016, Pereira et al., 2022, Najafabadi et al., 2024).
Table: Representative Mobirobot Sensing and Actuation Configurations
| Platform | Mobility Type | Sensors | Comms |
|---|---|---|---|
| MSSR (Duong et al., 2016) | 3-motor drive, PTZ | Sonars, LRF, PTZ | 3G, RS-485 |
| SMARTmBOT | Diff-drive, chain | ToF x8, camera | Wi-Fi, ROS 2 |
| Mobirobot (SSL) | 4-omni X-drive | IMU, Encoders | Wi-Fi, ROS |
| ROMR | Hoverboard-BLDC | LiDAR, IMU, RGB-D | Wi-Fi, RC, ROS |
| Omobot | 4-mecanum | LiDAR, Camera | Wi-Fi, BT, Email, ROS |
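
As a concrete illustration of the holonomic drive topologies above, the following is a minimal sketch of the standard inverse-kinematics mapping for a four-mecanum-wheel base (body twist to wheel angular velocities). The geometry values and function names are illustrative assumptions, not parameters of any platform cited here.

```python
from dataclasses import dataclass

@dataclass
class MecanumGeometry:
    wheel_radius: float  # r: wheel radius in meters (illustrative)
    half_length: float   # lx: half the wheelbase along x, in meters
    half_width: float    # ly: half the track width along y, in meters

def body_twist_to_wheel_speeds(vx: float, vy: float, wz: float,
                               geom: MecanumGeometry):
    """Map a body twist (vx, vy in m/s; wz in rad/s) to wheel angular
    velocities in rad/s for a standard four-mecanum base.
    Wheel order: front-left, front-right, rear-left, rear-right."""
    k = geom.half_length + geom.half_width
    r = geom.wheel_radius
    w_fl = (vx - vy - k * wz) / r
    w_fr = (vx + vy + k * wz) / r
    w_rl = (vx + vy - k * wz) / r
    w_rr = (vx - vy + k * wz) / r
    return w_fl, w_fr, w_rl, w_rr

if __name__ == "__main__":
    geom = MecanumGeometry(wheel_radius=0.05, half_length=0.15, half_width=0.12)
    # Pure lateral translation at 0.3 m/s: wheels counter-rotate in an X pattern.
    print(body_twist_to_wheel_speeds(0.0, 0.3, 0.0, geom))
```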
3. Software Frameworks and Architecture
Mobirobot platforms leverage layered software architectures that integrate real-time motor/sensor control, network connectivity, perception, planning, and user interfaces.
- Low-Level Control: Embedded real-time loops (PID, admittance, or custom regulators) close actuator control loops and process sensor data at rates of up to 500 Hz (Duong et al., 2016, Pereira et al., 2022, Schperberg et al., 3 Nov 2025); a minimal fixed-rate PID sketch follows this list.
- Middleware and Protocols: Systems employ ROS (Robot Operating System; versions 1, 2, and micro-ROS), custom client-server APIs (REST, WebSocket), classical interprocess communication, or web-based management. Multi-protocol stacks (TCP/UDP/RTP) manage administrative, command, telemetry, and media streams (Duong et al., 2016, 0812.0070, Linus et al., 2022).
- Perception and Autonomy: Integration of SLAM (Gmapping, Cartographer, Hector, RTAB-Map), deep learning for perception (e.g., YOLOv8-pose for fall detection), sensor fusion (EKF), and online mapping enables autonomous behavior and environmental adaptation (Ahamad et al., 2024, Linus et al., 2022, Najafabadi et al., 2024).
- User Interfaces: Interfaces include browser-based dashboards, manual joysticks, mobile apps, VR overlays, regimen editors, and direct programmatic APIs; user inputs range from low-level drive commands to scripted regimen management and teleoperation (0812.0070, Dyck et al., 14 Jan 2026, Manríquez-Cisterna et al., 27 Aug 2025).
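
To make the low-level control layer concrete, here is a minimal sketch of a fixed-rate PID velocity loop of the general kind such embedded controllers run. The gains, the loop rate, and the `read_velocity`/`set_motor_command` hooks are hypothetical placeholders, not the API of any cited platform.

```python
import time

class PID:
    """Simple PID regulator with an anti-windup clamp on the integral term."""
    def __init__(self, kp: float, ki: float, kd: float, i_limit: float = 1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_limit = i_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measurement: float, dt: float) -> float:
        error = setpoint - measurement
        self.integral = max(-self.i_limit,
                            min(self.i_limit, self.integral + error * dt))
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def velocity_loop(read_velocity, set_motor_command, target: float,
                  rate_hz: float = 500.0):
    """Fixed-rate loop: read wheel velocity, compute a PID correction, and
    write the motor command. read_velocity/set_motor_command are
    caller-supplied hardware hooks (hypothetical)."""
    pid = PID(kp=0.8, ki=2.0, kd=0.01)
    dt = 1.0 / rate_hz
    next_tick = time.monotonic()
    while True:
        set_motor_command(pid.step(target, read_velocity(), dt))
        next_tick += dt
        time.sleep(max(0.0, next_tick - time.monotonic()))
```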
4. Autonomy, Teleoperation, and Safety Mechanisms
Mobirobot platforms integrate onboard intelligence for safety, mixed-initiative control, and resilience to communication interruptions.
- Autonomy Mechanisms: Examples include fuzzy-logic obstacle avoidance and safe-point return in MSSR (Duong et al., 2016), reinforcement learning-based multi-modal locomotion and Mixed-Integer Quadratically Constrained Programming (MIQCP) for planning in MOBIUS (Schperberg et al., 3 Nov 2025), reactive and model-based controllers for navigation, and rules-based/ML fall detection in home care robots (Ahamad et al., 2024).
- Teleoperation: Client-server or peer-to-peer architectures accept remote control commands (velocity, heading, regimen, etc.), with video and telemetry feedback provided for supervisory control (Duong et al., 2016, Demir et al., 2021).
- Safety Strategies: Approaches include real-time fuzzy supervisors for collision avoidance and network dropout recovery (Duong et al., 2016), onboard admittance control with reference governors for safe manipulation (Schperberg et al., 3 Nov 2025), and closed-loop mechanical braking and e-stop circuits in assistive robots (Manríquez-Cisterna et al., 27 Aug 2025); a minimal command-timeout watchdog sketch follows this list.
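
The dropout-recovery strategies above can be illustrated with a minimal command-timeout watchdog: if no valid teleoperation command arrives within a deadline, a caller-supplied fallback (e.g., zeroing the velocity command or triggering a safe-point return) is invoked. The timeout value and callback wiring are assumptions for illustration, not the logic of any cited system.

```python
import time

class CommandWatchdog:
    """Invokes a fallback action when remote commands stop arriving.

    feed() is called whenever a valid teleoperation command is received;
    check() is polled from the control loop."""
    def __init__(self, timeout_s: float, on_timeout):
        self.timeout_s = timeout_s
        self.on_timeout = on_timeout  # e.g., stop motors or return to a safe point
        self.last_command = time.monotonic()
        self.tripped = False

    def feed(self) -> None:
        self.last_command = time.monotonic()
        self.tripped = False

    def check(self) -> None:
        if not self.tripped and time.monotonic() - self.last_command > self.timeout_s:
            self.tripped = True
            self.on_timeout()

# Hypothetical wiring: stop the base after 0.5 s of operator silence.
watchdog = CommandWatchdog(timeout_s=0.5,
                           on_timeout=lambda: print("link lost: stopping base"))
```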
5. Experimental Characterization and Practical Use Cases
Rigorous empirical evaluation underpins Mobirobot development. Performance metrics vary by context:
- Networked Teleoperation: MSSR reports 310–645 ms round-trip sensor-command delay over the public Internet, 1.8–2.2 s video latency, and sub-decimeter path accuracy under teleoperation (Duong et al., 2016); a sketch of one way to measure such round-trip latency follows this list.
- Navigation and Autonomy: Differential-drive robots achieve cm-level path-tracking and <0.1 m odometry drift using SLAM-based correction (Jo et al., 2022, Ahamad et al., 2024).
- Payload and Power: Large-scale robots (ROMR) move payloads up to 90 kg with 8 h endurance at <\$1,500 hardware cost (Linus et al., 2022), while educational platforms focus on modularity and low-power operation.
- Human Interaction: The pediatric Mobirobot shows high engagement and satisfaction in feasibility studies, with positive stakeholder feedback and continued iterative refinement of the user experience (Dyck et al., 14 Jan 2026). The Moby standing support robot halves task time and reduces NASA-TLX cognitive/physical demand compared to wheelchairs (Manríquez-Cisterna et al., 27 Aug 2025).
- Robustness: MIRRAX demonstrates ingress through 150 mm ports, omnidirectional navigation, and operational resilience in legacy nuclear facilities (Cheah et al., 2022).
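
As context for how round-trip figures such as those above can be gathered, the sketch below times UDP probes against an echo endpoint and reports per-probe round-trip latency. The host, port, and packet format are hypothetical and do not reproduce the measurement setup of the cited studies.

```python
import socket
import struct
import time

def measure_rtt(host: str, port: int, samples: int = 20, timeout_s: float = 2.0):
    """Send timestamped UDP probes to an echo endpoint and return round-trip
    times in milliseconds. Assumes the remote side echoes payloads verbatim."""
    rtts = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout_s)
        for seq in range(samples):
            sock.sendto(struct.pack("!Id", seq, time.monotonic()), (host, port))
            try:
                data, _ = sock.recvfrom(64)
            except socket.timeout:
                continue  # dropped probe; skip it
            _, t_sent = struct.unpack("!Id", data)
            rtts.append((time.monotonic() - t_sent) * 1000.0)
    return rtts

# Hypothetical usage against a local echo service:
# print(measure_rtt("192.0.2.10", 9000))
```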
6. Applications and Current Research Directions
Mobirobot systems enable a diverse set of research and industrial applications:
- Networked robotics and teleoperation research: Experimental platforms for protocol, HRI, and control law validation (Duong et al., 2016, 0812.0070).
- Human support and rehabilitation: Socially assistive robots for therapy, mobility aids for elderly/disabled, and patient monitoring (Dyck et al., 14 Jan 2026, Manríquez-Cisterna et al., 27 Aug 2025, Ahamad et al., 2024).
- Inspection and hazardous environments: Adaptive robots for nuclear facility surveys and other extreme locations (Cheah et al., 2022).
- Swarm and multi-agent systems: Both physical and simulated Mobirobots are used for distributed control, rendezvous, and coverage problems (Jo et al., 2022).
- Low-cost research/education: Open-source, customizable Mobirobots for teaching, algorithm benchmarking, and rapid prototyping (Linus et al., 2022, Jo et al., 2022).
- Agile and multi-modal mobility: Multi-modal robots for urban scouting, climbing, and terrain adaptation (Schperberg et al., 3 Nov 2025).
- Consumer electronics and mobile device integration: Use of commodity smartphones as anchor points for data collection, VIO/SLAM, and wireless control (Najafabadi et al., 2024).
7. Challenges and Future Work
Mobirobot research continues to address challenges in robustness, real-time performance, sensor integration, human-robot interaction, and standardization. Open problems include:
- Context-aware autonomy: Integrating adaptive/flexible autonomy levels (e.g., in therapy or public settings) (Dyck et al., 14 Jan 2026).
- Sensor fusion reliability: Handling occlusion, variable lighting, and noisy measurement streams in clinical and real-world spaces.
- Scalability and modularity: Achieving zero-code, plug-and-play extension for new sensors/actuators without sacrificing performance (0812.0070, Jo et al., 2022).
- Security and privacy: End-to-end encryption for control and video streams in health and security applications (Demir et al., 2021, Najafabadi et al., 2024).
- Long-term experiments and field deployment: Comprehensive field trials, longitudinal studies, and open dataset publication for reproducibility and benchmarking (Linus et al., 2022, Manríquez-Cisterna et al., 27 Aug 2025).
Mobirobot platforms serve as foundational tools for ongoing investigations in distributed, context-aware, and human-centered robotics, enabling rigorous exploration of teleoperation, autonomy, and interaction far beyond traditional laboratory settings.