Self-Supervised Multimodal Fusion Transformer for Passive Activity Recognition (2209.03765v1)

Published 15 Aug 2022 in eess.SP, cs.CV, cs.HC, and cs.LG

Abstract: The pervasiveness of Wi-Fi signals provides significant opportunities for human sensing and activity recognition in fields such as healthcare. The sensors most commonly used for passive Wi-Fi sensing are based on passive Wi-Fi radar (PWR) and channel state information (CSI) data; however, current systems do not effectively exploit the information acquired through multiple sensors to recognise different activities. In this paper, we explore new properties of the Transformer architecture for multimodal sensor fusion. We study different signal processing techniques to extract multiple image-based features from PWR and CSI data, such as spectrograms, scalograms, and Markov transition fields (MTF). We first propose the Fusion Transformer, an attention-based model for multimodal and multi-sensor fusion. Experimental results show that our Fusion Transformer approach achieves results competitive with a ResNet architecture while using far fewer resources. To further improve the model, we propose a simple and effective framework for multimodal and multi-sensor self-supervised learning (SSL). The self-supervised Fusion Transformer outperforms the baselines, achieving an F1-score of 95.9%. Finally, we show that this approach significantly outperforms the others when trained with between 1% (2 minutes) and 20% (40 minutes) of the labelled training data.
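As a rough illustration of the fusion idea described in the abstract, the sketch below tokenises two image-based inputs (e.g. a PWR spectrogram and a CSI scalogram) with per-modality patch embeddings, tags each token with a learned modality embedding, and fuses the combined token sequence with a standard Transformer encoder before classifying from a CLS token. This is a minimal sketch only; the class names, patch size, embedding width, depth, and class count are illustrative assumptions, not the architecture or hyperparameters reported in the paper.

# Minimal, illustrative sketch of attention-based multimodal fusion.
# Sizes, depth, and class count are assumptions, not the paper's settings.
import torch
import torch.nn as nn

class FusionTransformerSketch(nn.Module):
    def __init__(self, img_size=64, patch=8, dim=128, depth=4, heads=4, num_classes=6):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # One patch-embedding stem per modality (e.g. PWR spectrogram, CSI scalogram).
        self.embed_pwr = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.embed_csi = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        # Learned modality embeddings mark which sensor each token came from.
        self.mod_tok = nn.Parameter(torch.zeros(2, 1, dim))
        self.cls_tok = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, 2 * n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def _tokens(self, x, stem, mod_idx):
        t = stem(x).flatten(2).transpose(1, 2)   # (B, n_patches, dim)
        return t + self.mod_tok[mod_idx]         # add modality embedding

    def forward(self, pwr_img, csi_img):
        b = pwr_img.size(0)
        tokens = torch.cat([self._tokens(pwr_img, self.embed_pwr, 0),
                            self._tokens(csi_img, self.embed_csi, 1)], dim=1)
        cls = self.cls_tok.expand(b, -1, -1)
        x = torch.cat([cls, tokens], dim=1) + self.pos
        x = self.encoder(x)                      # self-attention fuses both modalities
        return self.head(x[:, 0])                # classify from the CLS token

# Example: 64x64 single-channel feature images (e.g. spectrogram / scalogram / MTF).
model = FusionTransformerSketch()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 6])

Because the encoder attends over the concatenated token set, every patch from one sensor can attend to patches from the other, which is the sense in which this kind of fusion is attention-based.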

Citations (8)
