
Workload Estimation for Unknown Tasks: A Survey of Machine Learning Under Distribution Shift (2403.13318v1)

Published 20 Mar 2024 in cs.RO and cs.HC

Abstract: Human-robot teams involve humans and robots collaborating to achieve tasks under various environmental conditions. Successful teaming will require robots to adapt autonomously to a human teammate's internal state. An important element of such adaptation is the ability to estimate the human teammate's workload in unknown situations. Existing workload models use machine learning to model the relationships between physiological metrics and workload; however, these methods are susceptible to individual differences and are heavily influenced by other factors. These methods cannot generalize to unknown tasks, as they rely on standard machine learning approaches that assume data consists of independent and identically distributed (IID) samples. This assumption does not necessarily hold for estimating workload for new tasks. A survey of non-IID machine learning techniques is presented, where commonly used techniques are evaluated using three criteria: portability, model complexity, and adaptability. These criteria are used to argue which techniques are most applicable for estimating workload for unknown tasks in dynamic, real-time environments.
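The failure mode the abstract describes, a workload model trained under the IID assumption degrading on a new task, can be illustrated with a minimal synthetic sketch (not from the paper; the "heart rate" feature, class means, and threshold rule are all hypothetical choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, low_mean, high_mean, scale=5.0):
    # Synthetic "heart rate" feature for low- vs high-workload samples.
    x = np.concatenate([rng.normal(low_mean, scale, n),
                        rng.normal(high_mean, scale, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

# Task A: training data. Task B: an unknown task where the same workload
# levels produce a shifted physiological response (non-IID w.r.t. Task A).
xa, ya = make_task(500, 70, 90)
xb, yb = make_task(500, 85, 105)

# An "IID" classifier: threshold at the midpoint of the training class means.
threshold = (xa[ya == 0].mean() + xa[ya == 1].mean()) / 2

def accuracy(x, y, t):
    return float(((x > t).astype(float) == y).mean())

acc_a = accuracy(xa, ya, threshold)  # in-distribution: near-perfect
acc_b = accuracy(xb, yb, threshold)  # under shift: large drop
print(f"Task A accuracy: {acc_a:.2f}")
print(f"Task B accuracy: {acc_b:.2f}")
```

The model's decision rule is valid only for the training task's feature distribution; once the mapping from workload to physiology shifts, accuracy collapses, which is the motivation for the non-IID techniques (transfer learning, domain adaptation, test-time adaptation, continual learning, domain generalization) the survey evaluates.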

  125. V. Lomonaco, L. Pellegrini, A. Cossu, A. Carta, G. Graffieti, T. Hayes, M. De Lange, M. Masana, J. Pomponi, G. Van de Ven et al., “Avalanche: an end-to-end library for continual learning,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 3600–3610.
  126. J. Serra, D. Suris, M. Miron, and A. Karatzoglou, “Overcoming catastrophic forgetting with hard attention to the task,” in International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, 2018, pp. 4548–4557.
  127. M. Hasan and A. Roy-Chowdhury, “A continuous learning framework for activity recognition using deep hybrid feature models,” IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 1909–1922, 2015.
  128. J. Ye, S. Dobson, and F. Zambonelli, “Lifelong learning in sensor-based human activity recognition,” IEEE Pervasive Computing, vol. 18, no. 3, pp. 49–58, 2019.
  129. S. Ashry, T. Ogawa, and W. Gomaa, “Charm-deep: Continuous human activity recognition model based on deep neural network using imu sensors of smartwatch,” IEEE Sensors Journal, vol. 20, no. 15, pp. 8757–8770, 2020.
  130. C. Leite and Y. Xiao, “Resource-efficient continual learning for sensor-based human activity recognition,” IEEE Transactions on Embedded Computing Systems, vol. 21, no. 6, pp. 1–25, 2022.
  131. N. Churamani, M. Axelsson, A. Caldır, and H. Gunes, “Continual learning for affective robotics: A proof of concept for wellbeing,” in IEEE International Conference on Affective Computing and Intelligent Interaction, 2022, pp. 1–8.
  132. T. Lesort, V. Lomonaco, A. Stoian, D. Maltoni, D. Filliat, and N. Díaz-Rodríguez, “Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges,” Information Fusion, vol. 58, pp. 52–68, 2020.
  133. S. Spaulding, J. Shen, H. Park, and C. Breazeal, “Lifelong personalization via gaussian process modeling for long-term HRI,” Frontiers in Robotics and AI, vol. 8, p. 683066, 2021.
  134. M. De Lange, R. Aljundi, M. Masana, S. Parisot, X. Jia, A. Leonardis, G. Slabaugh, and T. Tuytelaars, “A continual learning survey: Defying forgetting in classification tasks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 7, pp. 3366–3385, 2021.
  135. A. Chaudhry, P. Dokania, T. Ajanthan, and P. Torr, “Riemannian walk for incremental learning: Understanding forgetting and intransigence,” in European Conference on Computer Vision, 2018, pp. 532–547.
  136. M. Mundt, Y. Hong, I. Pliushch, and V. Ramesh, “A wholistic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning,” Neural Networks, vol. 160, pp. 306–336, 2023.
  137. S.-I. Ao and H. Fayek, “Continual deep learning for time series modeling,” Sensors, vol. 23, no. 16, p. 7167, 2023.
  138. C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, 2017, pp. 1126–1135.
  139. D. Li, Y. Yang, Y. Song, and T. Hospedales, “Learning to generalize: Meta-learning for domain generalization,” in AAAI Conference on Artificial Intelligence, vol. 32, no. 1.   AAAI, 2018.
  140. K. Muandet, D. Balduzzi, and B. Schölkopf, “Domain generalization via invariant feature representation,” in International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, 2013, pp. 10–18.
  141. S. Suh, V. F. Rey, and P. Lukowicz, “Tasked: Transformer-based adversarial learning for human activity recognition using wearable sensors via self-knowledge distillation,” Knowledge-Based Systems, vol. 260, p. 110143, 2023.
  142. J. Li, C. Shen, L. Kong, D. Wang, M. Xia, and Z. Zhu, “A new adversarial domain generalization network based on class boundary feature detection for bearing fault diagnosis,” IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1–9, 2022.
143. X. Gu, J. Han, G.-Z. Yang, and B. Lo, “Generalizable movement intention recognition with multiple heterogeneous EEG datasets,” in IEEE International Conference on Robotics and Automation (ICRA).   IEEE, 2023, pp. 9858–9864.
144. M. Ilse, J. Tomczak, C. Louizos, and M. Welling, “DIVA: Domain invariant variational autoencoders,” in Conference on Medical Imaging with Deep Learning, ser. Proceedings of Machine Learning Research, 2020, pp. 322–348.
  145. P. Li, D. Li, W. Li, S. Gong, Y. Fu, and T. Hospedales, “A simple feature augmentation for domain generalization,” in IEEE/CVF International Conference on Computer Vision, 2021, pp. 8886–8895.
  146. X. Yue, Y. Zhang, S. Zhao, A. Sangiovanni-Vincentelli, K. Keutzer, and B. Gong, “Domain randomization and pyramid consistency: Simulation-to-real generalization without accessing target domain data,” in IEEE/CVF International Conference on Computer Vision, 2019, pp. 2100–2110.
  147. P. Khandelwal and P. Yushkevich, “Domain generalizer: A few-shot meta learning framework for domain generalization in medical imaging,” in Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning.   Springer, 2020, pp. 73–84.
  148. A. Sicilia, X. Zhao, and S. J. Hwang, “Domain adversarial neural networks for domain generalization: When it works and how to improve,” Machine Learning, pp. 1–37, 2023.
  149. Q. Liu, Q. Dou, and P. Heng, “Shape-aware meta-learning for generalizing prostate MRI segmentation to unseen domains,” in International Conference on Medical Image Computing and Computer-Assisted Intervention.   Springer, 2020, pp. 475–485.
  150. S. Hu, K. Zhang, Z. Chen, and L. Chan, “Domain generalization via multidomain discriminant analysis,” in Conference on Uncertainty in Artificial Intelligence, ser. Proceedings of Machine Learning Research, 2020, pp. 292–302.
  151. M. Ghifary, D. Balduzzi, W. Kleijn, and M. Zhang, “Scatter component analysis: A unified framework for domain adaptation and domain generalization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 7, pp. 1414–1430, 2016.
  152. D. Kim, Y. Yoo, S. Park, J. Kim, and J. Lee, “SelfReg: Self-supervised contrastive regularization for domain generalization,” in IEEE/CVF International Conference on Computer Vision, 2021, pp. 9619–9628.
  153. Z. Wang, M. Loog, and J. van Gemert, “Respecting domain relations: Hypothesis invariance for domain generalization,” in International Conference on Pattern Recognition.   IEEE, 2021, pp. 9756–9763.
  154. H. Li, S. Pan, S. Wang, and A. Kot, “Domain generalization with adversarial feature learning,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 5400–5409.
  155. S. Otálora, M. Atzori, V. Andrearczyk, A. Khan, and H. Müller, “Staining invariant features for improving generalization of deep convolutional neural networks in computational pathology,” Frontiers in Bioengineering and Biotechnology, p. 198, 2019.
  156. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” Communications of the ACM, vol. 63, no. 11, pp. 139–144, 2020.
  157. T. Hospedales, A. Antoniou, P. Micaelli, and A. Storkey, “Meta-learning in neural networks: A survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
  158. A. Antoniou, H. Edwards, and A. Storkey, “How to train your MAML,” arXiv preprint arXiv:1810.09502, 2018.
  159. L. Bertinetto, J. Henriques, P. Torr, and A. Vedaldi, “Meta-learning with differentiable closed-form solvers,” in International Conference on Learning Representations, 2018.
  160. C. Finn, K. Xu, and S. Levine, “Probabilistic model-agnostic meta-learning,” Advances in Neural Information Processing Systems, vol. 31, 2018.
  161. A. Nichol, J. Achiam, and J. Schulman, “On first-order meta-learning algorithms,” arXiv preprint arXiv:1803.02999, 2018.
  162. J. Yoon, T. Kim, O. Dia, S. Kim, Y. Bengio, and S. Ahn, “Bayesian model-agnostic meta-learning,” Advances in Neural Information Processing Systems, vol. 31, 2018.
163. S. Ravi and H. Larochelle, “Optimization as a model for few-shot learning,” in International Conference on Learning Representations, 2017.
164. Z. Li, F. Zhou, F. Chen, and H. Li, “Meta-SGD: Learning to learn quickly for few-shot learning,” arXiv preprint arXiv:1707.09835, 2017.
  165. K. Zhou, Y. Yang, T. Hospedales, and T. Xiang, “Learning to generate novel domains for domain generalization,” in European Conference on Computer Vision.   Springer, 2020, pp. 561–578.
  166. Y. Xu, D. Yu, Y. Luo, E. Zhu, and J. Lu, “Generative adversarial domain generalization via cross-task feature attention learning for prostate segmentation,” in International Conference on Neural Information Processing.   Springer, 2021, pp. 273–284.
  167. K. Chen, D. Zhuang, and J. Chang, “Discriminative adversarial domain generalization with meta-learning based cross-domain validation,” Neurocomputing, vol. 467, pp. 418–426, 2022.
  168. J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” Advances in Neural Information Processing Systems, vol. 30, 2017.
  169. M. Garnelo, D. Rosenbaum, C. Maddison, T. Ramalho, D. Saxton, M. Shanahan, Y. Teh, D. Rezende, and S. Eslami, “Conditional neural processes,” in International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, 2018, pp. 1704–1713.
  170. L. Zhou, Y. Liu, X. Bai, N. Li, X. Yu, J. Zhou, and E. R. Hancock, “Attribute subspaces for zero-shot learning,” Pattern Recognition, vol. 144, p. 109869, 2023.
  171. X. Li, X. Yang, Z. Ma, and J.-H. Xue, “Deep metric learning for few-shot image classification: A review of recent developments,” Pattern Recognition, p. 109381, 2023.
  172. S. Jha, D. Gong, X. Wang, R. Turner, and L. Yao, “The neural process family: Survey, applications and perspectives,” arXiv preprint arXiv:2209.00517, 2022.
  173. F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales, “Learning to compare: Relation network for few-shot learning,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 1199–1208.
  174. C. Simon, P. Koniusz, R. Nock, and M. Harandi, “Adaptive subspaces for few-shot learning,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 4136–4145.
  175. J. Kim, T. Oh, S. Lee, G. Pan, and I. Kweon, “Variational prototyping-encoder: One-shot learning with prototypical images,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 9462–9470.
176. E. Schonfeld, S. Ebrahimi, S. Sinha, T. Darrell, and Z. Akata, “Generalized zero- and few-shot learning via aligned variational autoencoders,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 8247–8255.
  177. J. Zhang, C. Zhao, B. Ni, M. Xu, and X. Yang, “Variational few-shot learning,” in IEEE/CVF International Conference on Computer Vision, 2019, pp. 1685–1694.
178. C. Rasmussen, “Gaussian processes in machine learning,” in Summer School on Machine Learning.   Springer, 2003, pp. 63–71.
  179. M. Garnelo, J. Schwarz, D. Rosenbaum, F. Viola, D. Rezende, S. Eslami, and Y. Teh, “Neural processes,” arXiv preprint arXiv:1807.01622, 2018.
  180. A. Wilson, Z. Hu, R. Salakhutdinov, and E. Xing, “Deep kernel learning,” in International Conference on Artificial Intelligence and Statistics, ser. Proceedings of Machine Learning Research, 2016, pp. 370–378.
  181. A. Foong, W. Bruinsma, J. Gordon, Y. Dubois, J. Requeima, and R. Turner, “Meta-learning stationary stochastic process prediction with convolutional neural processes,” Advances in Neural Information Processing Systems, vol. 33, pp. 8284–8295, 2020.
  182. J. Gu, K.-C. Wang, and S. Yeung, “Generalizable neural fields as partially observed neural processes,” in IEEE/CVF International Conference on Computer Vision, 2023, pp. 5330–5339.
  183. P. Holderrieth, M. Hutchinson, and Y. Teh, “Equivariant learning of stochastic fields: Gaussian processes and steerable conditional neural processes,” in International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, 2021, pp. 4297–4307.
  184. W. Bruinsma, S. Markou, J. Requeima, A. Y. K. Foong, T. Andersson, A. Vaughan, A. Buonomo, S. Hosking, and R. E. Turner, “Autoregressive conditional neural processes,” in International Conference on Learning Representations, 2023. [Online]. Available: https://openreview.net/forum?id=OAsXFPBfTBh
  185. M. Kim, K. Ryeol Go, and S. Yun, “Neural processes with stochastic attention: Paying more attention to the context dataset,” in International Conference on Learning Representations, 2022. [Online]. Available: https://openreview.net/forum?id=JPkQwEdYn8
186. J. Wang, T. Lukasiewicz, D. Massiceti, X. Hu, V. Pavlovic, and A. Neophytou, “NP-Match: When neural processes meet semi-supervised learning,” in International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, 2022, pp. 22 919–22 934.
  187. Z. Ye and L. Yao, “Contrastive conditional neural processes,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 9687–9696.
  188. E. Dexheimer and A. J. Davison, “Learning a depth covariance function,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 13 122–13 131.
  189. D. S. Pandey and Q. Yu, “Evidential conditional neural processes,” in AAAI Conference on Artificial Intelligence, vol. 37, no. 8, 2023, pp. 9389–9397.
  190. M. Patacchiola, J. Turner, E. Crowley, M. O’Boyle, and A. Storkey, “Bayesian meta-learning for the few-shot setting via deep kernels,” Advances in Neural Information Processing Systems, vol. 33, pp. 16 108–16 118, 2020.
191. J. Snell and R. Zemel, “Bayesian few-shot classification with one-vs-each Pólya-gamma augmented Gaussian processes,” in International Conference on Learning Representations, 2021.
  192. J. Rothfuss, V. Fortuin, M. Josifoski, and A. Krause, “PACOH: Bayes-optimal meta-learning with PAC-guarantees,” in International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, 2021, pp. 9116–9126.
  193. M. Sendera, J. Tabor, A. Nowak, A. Bedychaj, M. Patacchiola, T. Trzcinski, P. Spurek, and M. Zieba, “Non-Gaussian Gaussian processes for few-shot regression,” Advances in Neural Information Processing Systems, vol. 34, pp. 10 285–10 298, 2021.
  194. J. Rothfuss, C. Koenig, A. Rupenyan, and A. Krause, “Meta-learning priors for safe Bayesian optimization,” in Conference on Robot Learning, ser. Proceedings of Machine Learning Research, 2023, pp. 237–265.
195. P. Wei, Y. Ke, Y.-S. Ong, and Z. Ma, “Adaptive transfer kernel learning for transfer Gaussian process regression,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 6, pp. 7142–7156, 2023.
  196. M. M. Bånkestad, J. Sjölund, J. Taghia, and T. B. Schön, “Variational elliptical processes,” Transactions on Machine Learning Research, 2023. [Online]. Available: https://openreview.net/forum?id=djN3TaqbdA
  197. J. van Amersfoort, L. Smith, A. Jesson, O. Key, and Y. Gal, “On feature collapse and deep kernel learning for single forward pass uncertainty,” Advances in Neural Information Processing Systems, 2021.
198. J. Liu, S. Padhy, H. Ren, Z. Lin, Y. Wen, G. Jerfel, Z. Nado, J. Snoek, D. Tran, and B. Lakshminarayanan, “A simple approach to improve single-model deep uncertainty via distance-awareness,” Journal of Machine Learning Research, vol. 23, no. 42, 2023.
  199. V. Garcia and J. Bruna, “Few-shot learning with graph neural networks,” in International Conference on Learning Representations, 2018.
200. S. A. Toribio, J. Bhagat Smith, P. Baskaran, and J. A. Adams, “Uncertainty-aware visual workload estimation for human-robot teams,” in Conference on Cognitive and Computational Aspects of Situation Management, 2023, pp. 1–8.
