Latent Sensor Fusion: Multimodal Learning of Physiological Signals for Resource-Constrained Devices (2507.14185v1)
Abstract: Latent spaces offer an efficient and effective means of summarizing data while implicitly preserving meta-information through relational encoding. We leverage these meta-embeddings to develop a modality-agnostic, unified encoder. Our method employs sensor-latent fusion to analyze and correlate multimodal physiological signals. Using a compressed sensing approach with autoencoder-based latent space fusion, we address the computational challenges of biosignal analysis on resource-constrained devices. Experimental results show that our unified encoder is significantly faster, lighter, and more scalable than modality-specific alternatives, without compromising representational accuracy.
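The paper itself provides no code here, but the core idea described in the abstract, one shared encoder compressing windows from any biosignal modality into a common latent space, with per-modality latents fused downstream, can be sketched briefly. The following PyTorch example is a minimal illustration only: the fully connected layers, 256-sample windows, 16-dimensional latents, the `UnifiedAutoencoder` and `fuse_latents` names, and the mean-based fusion rule are all assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class UnifiedAutoencoder(nn.Module):
    """Illustrative modality-agnostic autoencoder: a single shared encoder
    maps fixed-length signal windows from any modality (e.g., ECG, PPG)
    to a small latent vector; a decoder reconstructs the window.
    All layer sizes are arbitrary choices for this sketch."""

    def __init__(self, window_len: int = 256, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(window_len, 64),
            nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64),
            nn.ReLU(),
            nn.Linear(64, window_len),
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)            # compressed latent representation
        return self.decoder(z), z      # reconstruction and latent

def fuse_latents(latents: list[torch.Tensor]) -> torch.Tensor:
    """Placeholder latent-space fusion: average per-modality latents.
    The paper's actual fusion rule may differ."""
    return torch.stack(latents, dim=0).mean(dim=0)

# Usage: encode two synthetic modalities with the same encoder, then fuse.
model = UnifiedAutoencoder()
ecg = torch.randn(8, 256)   # batch of synthetic ECG windows
ppg = torch.randn(8, 256)   # batch of synthetic PPG windows
_, z_ecg = model(ecg)
_, z_ppg = model(ppg)
fused = fuse_latents([z_ecg, z_ppg])
print(fused.shape)          # torch.Size([8, 16])
```

Because every modality passes through the same small encoder, the device needs only one model in memory regardless of how many sensors it reads, which is the efficiency argument the abstract makes against modality-specific encoders.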