A Weight-aware-based Multi-source Unsupervised Domain Adaptation Method for Human Motion Intention Recognition (2404.15366v2)
Abstract: Accurate recognition of human motion intention (HMI) helps exoskeleton robots improve wearing comfort and achieve natural human-robot interaction. A classifier trained on labeled source subjects (domains) performs poorly on an unlabeled target subject because of differences in individual motor characteristics. Unsupervised domain adaptation (UDA) has become an effective way to address this problem. However, the labeled data are collected from multiple source subjects that may differ not only from the target subject but also from each other. Current UDA methods for HMI recognition ignore the differences between source subjects, which reduces classification accuracy. This paper therefore accounts for these differences and develops a novel theory and algorithm for UDA-based HMI recognition, in which the margin disparity discrepancy (MDD) is extended to a multi-source UDA theory and a novel weight-aware multi-source UDA algorithm (WMDD) is proposed. A source-domain weight, adjusted adaptively according to the MDD between each source subject and the target subject, is incorporated into UDA to measure the differences between source subjects. The developed multi-source UDA theory guarantees the generalization error on the target subject and can be transformed into an optimization problem, bridging the gap between theory and algorithm. Moreover, a lightweight network is employed to guarantee real-time classification, and adversarial learning between the feature generator and ensemble classifiers is utilized to further improve generalization. Extensive experiments verify the theoretical analysis and show that WMDD outperforms previous UDA methods on HMI recognition tasks.
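The abstract describes source-domain weights that adapt to the estimated MDD between each source subject and the target subject, with smaller discrepancies earning larger weights. The sketch below is only an illustrative reading of that idea, not the authors' implementation: the softmax-over-negative-discrepancies rule, the `temperature` parameter, and the example MDD values are all assumptions introduced here for clarity.

```python
import numpy as np

def adaptive_source_weights(mdd_estimates, temperature=1.0):
    """Turn per-source discrepancy estimates into source-domain weights.

    Sources whose estimated margin disparity discrepancy (MDD) to the
    target subject is small receive larger weight; a softmax over the
    negative discrepancies keeps the weights positive and summing to one.
    (Hypothetical weighting rule; the paper's exact update may differ.)
    """
    d = np.asarray(mdd_estimates, dtype=float)
    logits = -d / temperature      # smaller discrepancy -> larger logit
    logits -= logits.max()         # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Example: three source subjects with (made-up) MDD estimates to the target.
weights = adaptive_source_weights([0.12, 0.45, 0.30])
print(weights)  # the closest source subject receives the largest weight
```

Under this reading, the weighted source risk would be the sum over source subjects of weight times per-source classification risk, so subjects whose motion characteristics resemble the target dominate training while dissimilar subjects are down-weighted rather than discarded.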