- The paper demonstrates that acceleration-level Bayesian decoding reduces NMSE by 72% compared to AR and EEGNet baselines in continuous BCI pursuit tasks.
- The paper employs ARD priors to dynamically select relevant EEG features, ensuring robust real-time adaptation and computational efficiency.
- The paper supports embodied cognition theory by linking acceleration-based motor intent decoding with improved, intuitive assistive robot navigation in built environments.
Bayesian Learning of Embodied Dynamics for Continuous BCI-Based Assistive Robot Control
Introduction
This paper addresses a critical shortcoming in existing brain-computer interface (BCI)-driven mobile assistive robotics: the lack of robust, real-time continuous pursuit control mechanisms that allow end-users—specifically, individuals with severe motor impairments—to precisely and intuitively navigate in complex built environments. Most existing BCI wheelchair systems interpret electroencephalography (EEG) signals to produce discrete, pre-defined control commands, which fundamentally constrains agility and adaptability, especially in dynamic public contexts such as hospitals, transit stations, or airports.
The paper proposes and rigorously validates a Bayesian inference framework that decodes embodied dynamics—specifically acceleration-based motor intents—from non-invasive EEG data, representing a biologically plausible alternative to kinematics-level and deep learning-based approaches. The central hypothesis is grounded in embodied cognition, positing that human motor intentions are fundamentally structured in dynamical (force and acceleration) rather than kinematic (velocity or position) variables, and that decoding at this dynamic level yields superior functional control.
Background and Theoretical Rationale
Limitations of Conventional Modalities
Prior control interfaces, including voice- and vision-based gesture systems, are vulnerable to noise and occlusion in public settings and offer limited privacy. EEG-based BCIs circumvent these constraints by decoding neural signals independently of such environmental confounds, providing both technical robustness and social discretion and directly supporting end-users' agency and dignity.
Motor Imagery Paradigms in BCI Control
EEG-based BCI research is dominated by two paradigms: (a) spontaneous signals from motor imagery (MI) and (b) evoked signals (such as P300 or SSVEP responses). While MI-based systems are well-studied, the prevailing focus has been on discrete control—selecting among limited operations or destinations—rather than supporting seamless, user-driven continuous control.
A further technical challenge is the pronounced non-stationarity of EEG signals, which demands continual recalibration and adaptation. Deep learning approaches such as EEGNet, while effective, are limited by their deterministic training protocols and only superficial analogy to biological neural computation.
Embodied Cognition and Acceleration-Based Control
Empirical and theoretical models of human motor control emphasize the privileged role of dynamics (force and acceleration) as the principal internal variables encoded by the brain. This is supported by trajectory invariants captured by the minimum-jerk model and by physiological evidence that neural activity correlates more strongly with acceleration than with velocity or position.
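For concreteness, the minimum-jerk model (Flash and Hogan, 1985) formalizes this invariant as the trajectory that minimizes the integrated squared jerk, the third derivative of position, over a movement of duration T (the standard textbook formulation, reproduced here for reference rather than taken from the paper):

```latex
J \;=\; \frac{1}{2} \int_{0}^{T} \left\lVert \frac{d^{3}\mathbf{x}(t)}{dt^{3}} \right\rVert^{2} dt
```

Minimizing J yields the smooth, bell-shaped velocity profiles characteristic of human reaching, a regularity most naturally expressed in terms of higher-order dynamics rather than position.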
The Bayesian brain hypothesis further frames perception and action as inferential processes under uncertainty: neural circuits continually update internal models to minimize prediction error, integrating noisy sensory evidence with prior expectations. This biological computation is fundamentally probabilistic and hierarchical, contrasting with the deterministic deep networks typically employed in BCI decoding.
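In its simplest form, this inferential view is Bayes' rule applied to a latent motor state s given a noisy observation o (a textbook formulation; the notation is not the paper's):

```latex
p(s \mid o) \;=\; \frac{p(o \mid s)\, p(s)}{p(o)} \;\propto\; \underbrace{p(o \mid s)}_{\text{sensory evidence}} \; \underbrace{p(s)}_{\text{prior expectation}}
```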
Methods
Dataset and Experimental Design
Evaluation employed the Continuous Pursuit (CP) BCI benchmark dataset: sixteen hours of 64-channel EEG recordings from four subjects performing a kinesthetic motor imagery (MI) cursor-tracking task. Subjects controlled a 2D cursor by imagining left-hand, right-hand, or both-hand movements, with labels comprising both velocity and acceleration signals derived from the cursor kinematics.
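The paper's exact feature pipeline is not reproduced here; as a minimal sketch, log-bandpower features of the kind consumed by the regression decoders described below might be extracted per window as follows (the sampling rate and the mu/beta band edges are conventional assumptions, not parameters from the CP dataset description):

```python
import numpy as np
from scipy.signal import welch

def bandpower_features(window, fs=100.0, bands=((8, 13), (13, 30))):
    """window: [n_channels, n_samples] EEG segment -> flat feature vector.

    Per-channel power in the mu (8-13 Hz) and beta (13-30 Hz) bands is
    estimated with Welch's method; band edges and fs are assumptions.
    """
    f, psd = welch(window, fs=fs, nperseg=min(256, window.shape[-1]))
    df = f[1] - f[0]
    feats = []
    for lo, hi in bands:
        mask = (f >= lo) & (f < hi)
        feats.append(psd[:, mask].sum(axis=-1) * df)  # integrate PSD over band
    return np.log(np.concatenate(feats) + 1e-12)  # log-power stabilizes scale

# Example: one 1-second window of 64-channel EEG at the assumed 100 Hz
features = bandpower_features(np.random.randn(64, 100))  # -> 128 features
```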
Two experimental phases were evaluated: subject-specific cumulative learning, and cross-subject transfer learning with online adaptation. The principal metric was the normalized mean squared error (NMSE) between predicted and true velocities.
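The paper's exact normalization convention is not restated here; a common definition scales the squared error by the variance of the true signal, so that 0 is perfect tracking and 1 matches a constant mean predictor:

```latex
\mathrm{NMSE} \;=\; \frac{\sum_{t} \left( \hat{v}_{t} - v_{t} \right)^{2}}{\sum_{t} \left( v_{t} - \bar{v} \right)^{2}}
```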
Model Architectures
Three regression decoders were compared:
- Bayesian Decoder: Implements Bayesian ridge regression, leveraging Gaussian priors (isotropic or Automatic Relevance Determination, ARD) and continual online updating of weights and feature relevance (a minimal sketch of this update appears after this list).
- Autoregressive (AR) Model: Traditional baseline with bandpower features mapped to target kinematics via ridge regression.
- EEGNet: Canonical deep learning baseline using convolutional neural network architecture, optimized for EEG regression via session-wise and transfer learning protocols.
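As a minimal sketch of the Bayesian decoder's continual updating (assuming a Gaussian linear model with fixed noise precision; the paper's hyperparameter estimation and ARD machinery are richer than this), the posterior over decoder weights can be updated recursively, with each batch's posterior serving as the next batch's prior:

```python
import numpy as np

class OnlineBayesianRidge:
    """Recursive Bayesian ridge regression for continual decoder adaptation.

    Illustrative sketch, not the paper's implementation. An ARD variant
    would replace the scalar prior precision `alpha` with one precision
    per feature, letting the model prune irrelevant channels.
    """

    def __init__(self, n_features, alpha=1.0, beta=25.0):
        self.beta = beta                         # noise precision (1 / sigma^2)
        self.mean = np.zeros(n_features)         # posterior mean of weights
        self.prec = alpha * np.eye(n_features)   # posterior precision matrix

    def update(self, X, y):
        """Fold a new batch (X: [n, d], y: [n]) into the posterior."""
        prec_new = self.prec + self.beta * X.T @ X
        rhs = self.prec @ self.mean + self.beta * X.T @ y
        self.mean = np.linalg.solve(prec_new, rhs)
        self.prec = prec_new

    def predict(self, X, return_std=False):
        """Posterior-mean prediction, optionally with predictive uncertainty."""
        y_hat = X @ self.mean
        if not return_std:
            return y_hat
        cov = np.linalg.inv(self.prec)
        var = 1.0 / self.beta + np.einsum("nd,de,ne->n", X, cov, X)
        return y_hat, np.sqrt(var)
```

Under an ARD prior, the learned per-feature precisions drive the weights of uninformative features toward zero, which is the mechanism behind the feature-selection result reported under Results.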
Decoding was performed in both velocity and acceleration modes. In the latter, predicted accelerations were integrated in real time to reconstruct velocities, as sketched below.
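A hedged sketch of that reconstruction step follows; the forward-Euler scheme is straightforward, and the `leak` damping term is an assumption added here to illustrate drift control, not a detail from the paper:

```python
import numpy as np

def integrate_acceleration(accel_pred, dt, leak=0.0, v0=0.0):
    """Reconstruct a velocity trace from predicted accelerations.

    Forward-Euler integration with an optional leak term that damps the
    drift accumulated from prediction noise (the leak is an assumption,
    not part of the paper's method).
    """
    v = np.empty(len(accel_pred), dtype=float)
    v_prev = v0
    for t, a in enumerate(accel_pred):
        v_prev = (1.0 - leak) * v_prev + a * dt  # v_t = (1 - leak) * v_{t-1} + a_t * dt
        v[t] = v_prev
    return v

# Example: 10 s of 50 Hz acceleration predictions -> velocity trace for NMSE scoring
accel = 0.1 * np.random.randn(500)
velocity = integrate_acceleration(accel, dt=1.0 / 50, leak=0.01)
```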
Results
Main Findings
Acceleration-Level Decoding Superiority: Operating in acceleration mode, the Bayesian decoder achieved a 72% reduction in NMSE relative to both the AR and EEGNet baselines in session-accumulative transfer-learning settings (p < 0.001; large effect size). No significant differences between the models were found in velocity mode (p = 1.0).
Feature Selection with ARD: ARD priors further reduced NMSE relative to isotropic priors in acceleration decoding, demonstrating the decoder's capacity to identify relevant neural features dynamically.
Model Efficiency: Bayesian adaptation required significantly less computation time than EEGNet training, making it more feasible for real-time, on-device deployment.
Stability Across Runs and Subjects: Bayesian acceleration decoding demonstrated consistent error suppression and stability across multiple runs and subjects, whereas AR and EEGNet exhibited higher error variance.
Theoretical Implications
The findings strongly support the embodied cognition hypothesis: motor intentions are not abstract kinematic commands but the continual regulation of dynamic process variables. Notably, acceleration-space decoding proved both biologically plausible and more accurate even in pure motor imagery, absent any physical execution, underscoring the persistence of embodiment during internal simulation.
The utility of Bayesian inference for capturing uncertainty and probabilistic hierarchical structure aligns neural computation with the underlying principles of motor control. This is in marked contrast to conventional deep learning, which lacks explicit uncertainty representation and biological interpretability.
Implications for Assistive Robotics and Embodied AI
Practical Implications
This embodied Bayesian approach directly addresses the limitations of discrete BCI-driven wheelchair navigation, enabling smooth, responsive, and user-initiated motion control in dynamic environments. By decoding neural dynamics aligned with biological principles, system stability and user agency are enhanced, potentially reducing calibration burden and fostering trust in shared-control human-robot collaboration.
Theoretical and Future Directions
For robotics and digital twin frameworks, these results suggest that acceleration- and force-based models should be canonical, mirroring both overt physical control and internal simulated embodiment. This informs the development of embodied AI systems capable of functionally integrating with human cognition and sensorimotor models.
Future research should extend real-time experiments to physical platforms (e.g., robotic wheelchairs in built environments), refine hierarchical and regulatory trajectory control mechanisms, and integrate principled noise modeling within adaptive control architectures.
Conclusion
The paper empirically validates a brain-inspired Bayesian learning paradigm for continuous assistive robot control, demonstrating decisive performance gains and biological alignment when decoding acceleration-level motor imagery from EEG. These findings substantiate theories of embodied cognition and probabilistic brain computation, and demonstrate practical viability for real-time, adaptive, and trustworthy human-machine interfaces in assistive robotics. The proposed framework constitutes a foundational step toward embodied, flexible, and robust BCI control systems, with broad implications for digital twins, robotics, and the design of embodied AI.