PhysMLE: Generalizable and Priors-Inclusive Multi-task Remote Physiological Measurement (2405.06201v2)
Abstract: Remote photoplethysmography (rPPG) has been widely applied to measure heart rate from face videos. To increase the generalizability of rPPG algorithms, domain generalization (DG) has attracted increasing attention. However, when rPPG is extended to simultaneously measure additional vital signs (e.g., respiration and blood oxygen saturation), achieving generalizability raises new challenges. Although features partially shared among different physiological signals can benefit multi-task learning, the sparse and imbalanced target label space induces a seesaw effect in task-specific feature learning. To resolve this problem, we designed an end-to-end Mixture of Low-rank Experts for multi-task remote Physiological measurement (PhysMLE), built on multiple low-rank experts with a novel router mechanism, enabling the model to handle both task-specific characteristics and inter-task correlations. Additionally, we introduced physiological prior knowledge across tasks to overcome the label-space imbalance encountered in real-world multi-task physiological measurement. For fair and comprehensive evaluation, this paper proposes a large-scale multi-task generalization benchmark, the Multi-Source Synsemantic Domain Generalization (MSSDG) protocol. Extensive experiments under both the MSSDG and intra-dataset settings demonstrate the effectiveness and efficiency of PhysMLE. In addition, a new dataset was collected and made publicly available to meet the needs of MSSDG.
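To make the core idea concrete, below is a minimal PyTorch-style sketch of a mixture-of-low-rank-experts layer with per-task routing: each expert is a LoRA-style low-rank update on a shared projection, and a task-specific router softly mixes the experts. All class names, parameter values, and the task indexing here are illustrative assumptions for exposition, not PhysMLE's actual implementation.

```python
# Illustrative sketch only: a mixture of low-rank (LoRA-style) experts with
# one router per task. Not the authors' exact architecture.
import torch
import torch.nn as nn


class LowRankExpert(nn.Module):
    """One LoRA-style expert: a rank-r bottleneck producing a low-rank update."""

    def __init__(self, dim_in: int, dim_out: int, rank: int = 4):
        super().__init__()
        self.down = nn.Linear(dim_in, rank, bias=False)  # project d_in -> r
        self.up = nn.Linear(rank, dim_out, bias=False)   # project r -> d_out
        nn.init.zeros_(self.up.weight)                   # expert starts as a zero update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


class MixtureOfLowRankExperts(nn.Module):
    """Shared linear layer plus a router-weighted sum of low-rank experts."""

    def __init__(self, dim_in: int, dim_out: int,
                 num_experts: int = 4, num_tasks: int = 3, rank: int = 4):
        super().__init__()
        self.shared = nn.Linear(dim_in, dim_out)  # projection shared by all tasks
        self.experts = nn.ModuleList(
            [LowRankExpert(dim_in, dim_out, rank) for _ in range(num_experts)]
        )
        # One lightweight router per task: softmax weights over the experts.
        self.routers = nn.ModuleList(
            [nn.Linear(dim_in, num_experts) for _ in range(num_tasks)]
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        gate = torch.softmax(self.routers[task_id](x), dim=-1)          # (B, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, D, E)
        mixed = (expert_out * gate.unsqueeze(1)).sum(dim=-1)            # (B, D)
        return self.shared(x) + mixed


# Usage: one forward pass per task head (e.g., 0 = HR, 1 = RR, 2 = SpO2).
feats = torch.randn(8, 128)                      # batch of pooled video features
layer = MixtureOfLowRankExperts(dim_in=128, dim_out=128)
hr_feat = layer(feats, task_id=0)                # features routed for the HR task
```

Because each expert is only rank-r, adding experts scales the parameter count far more gently than duplicating full task-specific backbones, which is the efficiency motivation for combining MoE-style routing with low-rank adapters in a multi-task setting.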