Sharif Mechatronic Lab Exoskeleton (SMLE)
- SMLE is a modular, wearable robotic exoskeleton integrating soft sensors, pneumatic actuation, and adaptive control for enhanced human motor function.
- Its design features lightweight mechanical frames, multimodal sensor fusion, and deep learning-driven intent prediction to ensure precise and energy-efficient movement.
- Rigorous evaluations using comparative studies, multicriteria optimization, and open-source toolkits demonstrate its practical applications in rehabilitation and strength augmentation.
The Sharif Mechatronic Lab Exoskeleton (SMLE) is an intelligent, modular, wearable robotic system developed for human motor augmentation and rehabilitation, with research contributions spanning upper- and lower-limb devices. SMLE leverages advances in soft wearable sensors, pneumatic actuation, deep learning, intent-driven control, and analytical design for both strength assistance and adaptive motion prediction. Its design and evaluation are grounded in rigorous comparative studies, multicriteria mechanical synthesis, and multimodal physiological monitoring.
1. System Architecture and Core Components
SMLE encompasses upper-limb and lower-limb exoskeleton platforms, unified by several core architectural elements:
- Mechanical Frame: Lightweight structures, primarily carbon fiber with aluminum connectors for upper-limb variants, and compact 6-link Stephenson III–type linkages for lower limbs. The lower-limb mechanism emphasizes anatomical compatibility via multicriteria-optimized geometry and reduced joint count (Ibrayev et al., 2023).
- Actuation: Upper-limb SMLE employs soft pneumatic artificial muscles (PAMs; up to 897 N force, 87 mm displacement), offering compliant, biomimetic motion for flexion/extension at shoulder and elbow. The lower-limb variant reduces active joint count through kinematic synthesis, favoring energy efficiency and simplified actuation.
- Sensor Integration: Custom “skin-like” soft wireless EMG patches, integrated IMUs, and force sensors provide high-fidelity real-time biofeedback. For lower-limb platforms, sensor suites may include 3D-printed insole arrays and crutch-mounted load cells for comprehensive biomechanical data capture (Marinou et al., 2 Sep 2024).
- Data Handling and Connectivity: Real-time sensor data are streamed to a cloud-based or on-board computational system via BLE. The central processor enables intent prediction, phase estimation, and exoskeleton control using deep neural networks or fuzzy logic frameworks as appropriate.
- Human–Robot Interface: Modular GUIs allow monitoring, manual override, and visualization of system activity.
This modularity and sensor fusion strategy allow tailored adaptation to user needs, robust biomechanical monitoring, and flexible expansion.
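The pneumatic artificial muscles noted above can be characterized, to first order, by the classic idealized McKibben model (Chou–Hannaford). This is a generic textbook relation, not the SMLE actuator's identified model, and the pressure, diameter, and braid-angle values below are illustrative placeholders:

```python
import math

def pam_force(pressure_pa, d0, theta0, contraction):
    """Idealized McKibben muscle force (Chou-Hannaford model).

    pressure_pa : gauge pressure [Pa]
    d0          : resting muscle diameter [m]
    theta0      : resting braid angle [rad]
    contraction : contraction ratio epsilon = (L0 - L) / L0
    """
    a = 3.0 / math.tan(theta0) ** 2
    b = 1.0 / math.sin(theta0) ** 2
    # Force falls off as the muscle contracts and the braid angle grows.
    return (pressure_pa * math.pi * d0 ** 2 / 4.0) * (a * (1.0 - contraction) ** 2 - b)

# Illustrative geometry: 500 kPa, 10 mm diameter, 20 deg braid angle.
f_rest = pam_force(500e3, 0.010, math.radians(20), 0.00)
f_short = pam_force(500e3, 0.010, math.radians(20), 0.10)
```

The monotone drop in force with contraction is what gives PAMs their compliant, muscle-like behavior; the real actuator additionally exhibits friction and end-effect losses that this ideal model omits.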
2. Intent Prediction and Adaptive Control Strategies
SMLE incorporates state-of-the-art cognitive human–robot interaction (HRI) paradigms:
- Deep Learning-Based Intention Decoding: Real-time EMG signals, preprocessed with bandpass and notch filtering, are segmented (e.g., 1-s windows) and input to a hybrid CNN-LSTM. This architecture detects intended upper-limb motions with ∼96% classification accuracy and a system response time of 500–550 ms (Lee et al., 2023). ADAM optimization, Leaky ReLU activation, and dropout regularization are used for robust online learning.
- Meta-Learning for Task Adaptation: Lower-limb and upper-limb versions can employ model-agnostic meta-learning (MAML), training task-specific neural networks for rapid adaptation to novel subjects and tasks from few-shot gait data. The adapted network outputs joint angle trajectories, which are tracked by a gravity-compensated PD controller. This structure yields fast personalization and improved generalization in task-variant settings, reducing muscle activation and metabolic cost in new users and tasks (Ma et al., 17 Sep 2025).
- Real-Time Joint Angle Prediction: For ambulation and rehabilitation contexts, an attention-based CNN-LSTM predicts knee angles using EMG, kinematics, and interaction forces. Transfer learning from large cross-lab datasets enables rapid fine-tuning—only a few gait cycles are required for SMLE-specific deployment—with 1.09% NMAE for one-step knee predictions and 3.1% for 50-step predictions (Mollahossein et al., 15 Oct 2025).
- Robust Locomotion Control: For non-steady gait scenarios (ramp, rough terrain), a shank angle–based dual-Gaussian profile, updated on every stride via IMU, serves as the reference for a model-based feedforward controller. This approach synchronizes exoskeleton force delivery with biological plantar flexion torques under variable phase and perturbation, maintaining high kinematic correlation across gaits (Tan et al., 13 Aug 2025).
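A minimal sketch of the dual-Gaussian assistance profile over normalized stride phase follows. The peak locations, widths, and amplitudes here are hypothetical placeholders; in the controller described above they are re-estimated on every stride from the shank-angle IMU:

```python
import math

def dual_gaussian_torque(phase, p1=(0.45, 0.06, 1.0), p2=(0.62, 0.05, 0.4)):
    """Assistive torque profile over normalized stride phase in [0, 1].

    Each bump is (mu, sigma, amplitude); the values here are illustrative only.
    """
    tau = 0.0
    for mu, sigma, amp in (p1, p2):
        tau += amp * math.exp(-((phase - mu) ** 2) / (2.0 * sigma ** 2))
    return tau

# Sample the profile over one stride (0% to 100% of the gait cycle).
profile = [dual_gaussian_torque(i / 100.0) for i in range(101)]
```

Because the profile is parameterized by only six numbers, a per-stride update from the IMU is cheap, which is what makes the feedforward scheme robust to variable phase and terrain.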
3. Physical and Cognitive Human–Robot Interaction
The SMLE design directly addresses both physical and cognitive interfaces:
- Physical HRI: Mechanical designs rigorously match human anatomy through parameterized synthesis (Ibrayev et al., 2023), minimize added mass/inertia, employ compliant actuators (such as series elastic actuators and PAMs), and optimize strength-to-weight via composite materials (Nazari et al., 2021). Coupled dynamics are quantified using detailed neuromusculoskeletal models, facilitating “transparent” support with minimized misalignment and user discomfort (Jin et al., 2023).
- Cognitive HRI: Real-time intent prediction is achieved through sensor fusion (EMG, IMU, force), feature extraction (time/frequency domains), and deep/transfer learning. Human-in-the-loop and shared control models are emphasized—authority shifts between user and exoskeleton based on sensor feedback and state arbitration (Nazari et al., 2021). AI-based decision-making allows dynamic adjustment of assistance and safety in unpredictable or strenuous tasks.
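The shared-control arbitration idea can be sketched as confidence-weighted blending between a user-derived command and the exoskeleton's autonomous assist command. The blending rule and the confidence source are assumptions for illustration, not the arbitration scheme of the cited work:

```python
def blend_command(tau_user, tau_assist, confidence):
    """Shift control authority toward the user as intent-decoder confidence rises.

    confidence in [0, 1]: 0 -> full autonomous assist, 1 -> full user authority.
    """
    alpha = min(max(confidence, 0.0), 1.0)   # clamp for safety
    return alpha * tau_user + (1.0 - alpha) * tau_assist

low = blend_command(10.0, 2.0, 0.1)   # decoder unsure: mostly autonomous assist
high = blend_command(10.0, 2.0, 0.9)  # decoder confident: mostly user command
```

A continuous blend avoids the abrupt torque steps that a hard switch between user and machine authority would produce.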
4. Soft Sensing, Multimodality, and Biomechanical Feedback
A layered, physiologically informed sensing platform extends SMLE's capabilities:
- Multimodal Sensing: Integration of textile sEMG, ultrasensitive cutaneous strain sensors (printed graphene microcracks), and compact IMUs creates a multilayer architecture targeting muscular, cutaneous, and skeletal signals simultaneously (Tang et al., 16 Aug 2025).
- Personalized Assistance and Risk Detection: Joint moments are estimated with RMSE = 0.13 Nm/kg; classification of metabolic trends is achieved with accuracy = 97.1%; and injury risk is detected within 100 ms at 96.4% recall, all validated under strict leave-one-out protocols. This enables real-time, personalized exoskeleton control and ultra-fast safety responses.
- Rehabilitation Monitoring: Quantitative scoring (e.g., automated Fugl-Meyer Assessment) and adaptation of controller parameters from patient-specific physiological states support optimized, responsive rehabilitation (Halder et al., 2023).
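The strict leave-one-out protocol mentioned above corresponds to leave-one-subject-out (LOSO) cross-validation, in which no data from the test subject ever appears in training. A minimal split generator, with hypothetical subject IDs, looks like:

```python
def loso_splits(subjects):
    """Yield (train, test) partitions: each subject is held out exactly once."""
    for held_out in subjects:
        train = [s for s in subjects if s != held_out]
        yield train, [held_out]

splits = list(loso_splits(["S1", "S2", "S3", "S4"]))
```

LOSO is the appropriate protocol here because EMG and kinematic signals are strongly subject-specific; per-sample random splits would leak subject identity and inflate the reported accuracies.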
5. Mechanical and Kinematic Optimization
Lower-limb SMLE development utilizes rigorous multicriteria optimization:
- Analytical Synthesis: The exoskeleton's 6-bar (Stephenson III–type) linkage is analytically synthesized to minimize foot trajectory deviation, maximize force transmission, maintain anatomical ratios, and control swing phase height (Ibrayev et al., 2023). The optimization objective combines trajectory accuracy, chassis height, transmission angles, anatomic matching, penalties for “external” foot transfer, and swing height.
- Performance Metrics: Key metrics include average foot path error (<0.05), biomechanical force transmission efficiency, minimized exoskeleton mass through link minimization, and realistic human gait replication.
Such mechanistic rigor reduces the need for direct motorization at each joint—promoting energy efficiency and maintenance simplicity, while enabling naturalistic movement support.
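The combined objective described above can be sketched as a weighted sum of normalized criteria with additive penalty terms. The criterion names, weights, and penalty form below are hypothetical, standing in for the analytical formulation of Ibrayev et al. (2023):

```python
def linkage_objective(metrics, weights, penalties):
    """Weighted-sum multicriteria cost for linkage synthesis (lower is better).

    metrics   : dict of normalized criterion values in [0, 1]
    weights   : dict of per-criterion weights (same keys)
    penalties : list of additive penalty terms (e.g. "external" foot transfer)
    """
    cost = sum(weights[k] * metrics[k] for k in metrics)
    return cost + sum(penalties)

# Hypothetical candidate linkage and weighting.
candidate = {"path_error": 0.04, "transmission": 0.20,
             "anatomy": 0.10, "swing_height": 0.15}
weights = {"path_error": 0.4, "transmission": 0.3,
           "anatomy": 0.2, "swing_height": 0.1}
cost = linkage_objective(candidate, weights, penalties=[0.0])
```

Scalarizing the criteria this way lets a single optimizer rank candidate geometries, at the cost of having to choose the weights up front.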
6. Simulation, Evaluation, and Open-Source Toolchains
Evaluation of the SMLE system and its submodules is grounded in advanced modeling and open validation:
- Dynamic Simulation: Optimization-based simulation frameworks integrate neuromusculoskeletal feedback loops, multi-rigid-body dynamics, and exoskeleton interaction models (Jin et al., 2023). Two-phase parameter identification using CMA-ES enables tuning for performance criteria such as speed, torque profiles, ground reaction forces, and metabolic cost.
- Modular Sensing Toolkits: Sensorized crutch and insole systems, validated against motion capture and force plate “gold standards,” enable portable, field-compatible biomechanical feedback (Marinou et al., 2 Sep 2024). Fuzzy logic algorithms, calibrated via sigmoid membership functions, permit robust real-time gait phase detection (RMSE ~28 ms), with open-source hardware and software designs available.
- Comparative Validation and Datasets: SMLE’s adaptability is benchmarked via transfer learning on cross-institution datasets, empirical studies of real-world operation, and gold-standard comparisons in motion prediction and control.
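The sigmoid-membership fuzzy gait-phase detection mentioned above can be sketched as follows. The input features, centers, slopes, and two-phase labels are illustrative assumptions, not the calibrated values of the open-source toolkit:

```python
import math

def sigmoid_membership(x, center, slope):
    """Sigmoid membership: ~0 well below center, ~1 well above it."""
    return 1.0 / (1.0 + math.exp(-slope * (x - center)))

def gait_phase(insole_load, crutch_load):
    """Toy fuzzy arbiter between 'stance' and 'swing'.

    Loads are normalized to [0, 1]; centers and slopes are illustrative.
    """
    stance = sigmoid_membership(insole_load, center=0.3, slope=12.0)
    swing = 1.0 - stance
    # High crutch loading shifts confidence toward swing (weight off the foot).
    swing = max(swing, sigmoid_membership(crutch_load, center=0.6, slope=10.0))
    return "stance" if stance >= swing else "swing"

phase_a = gait_phase(insole_load=0.80, crutch_load=0.10)  # loaded foot
phase_b = gait_phase(insole_load=0.05, crutch_load=0.70)  # unloaded foot
```

The smooth membership functions are what make the detector robust to sensor noise near phase transitions, compared with hard thresholding.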
7. Applications, Clinical Relevance, and Future Directions
SMLE stands at the intersection of rehabilitation, assistive augmentation, research, and industrial support:
- Rehabilitation: Provides patient-specific neuromuscular training, real-time feedback, and automated adjustment, particularly for post-stroke and upper-limb impairment scenarios (Halder et al., 2023).
- Strength Augmentation: Empirical studies demonstrate up to 5.15-fold increase in upper-limb output for users compared to unassisted conditions, achieved via intent-driven PAM actuation (Lee et al., 2023).
- Personalization and Generalization: Rapid adaptation to novel users/tasks via meta-learning, transfer learning, and few-shot calibration enables individualized support and high generalization across operating contexts (Ma et al., 17 Sep 2025; Mollahossein et al., 15 Oct 2025).
- Gait Assistance and Control Robustness: Dual-Gaussian, IMU-driven control dynamics allow robust assistance in non-steady locomotion, with reduced metabolic cost and muscle effort across variable terrains and perturbations (Tan et al., 13 Aug 2025).
- Open-Source Dissemination: Sensor platforms and computational frameworks are released open-source to enable reproducibility, rapid innovation, and deployment beyond laboratory confines (Marinou et al., 2 Sep 2024).
The contributing studies point toward future work on deeper on-device learning for complex gaits, extending layered sensing to diverse tasks, bridging simulation-to-reality gaps, and refining autonomous adaptation for broader clinical and non-clinical exoskeleton applications.
The Sharif Mechatronic Lab Exoskeleton exemplifies the convergence of advanced sensing, biomechanically informed synthesis, adaptive control, and empirical rigor in exoskeleton research, serving as both an experimental platform and a benchmark for intelligent, adaptable, and clinically relevant wearable robotics.