Wireless Multimodal Foundation Model (WMFM): Integrating Vision and Communication Modalities for 6G ISAC Systems

Published 29 Dec 2025 in cs.NI (arXiv:2512.23897v1)

Abstract: The emergence of multimodal foundation models has revolutionized learning paradigms by enabling joint understanding across diverse data types. In the context of next-generation wireless networks, integrating sensing and communication modalities presents a unique opportunity to develop generalizable and data-efficient models. In this work, we introduce the contrastive-learning-based Wireless Multimodal Foundation Model (WMFM), a large-scale framework that jointly learns from wireless channel coefficients and visual imagery. The WMFM is pretrained using contrastive learning, a self-supervised technique that aligns embeddings of camera and channel data without requiring explicit labels. The pretrained encoders are then frozen and employed as feature extractors, with lightweight task-specific heads fine-tuned for downstream tasks, including user localization and LoS/nLoS classification. Extensive experiments on the DeepVerse6G dataset demonstrate that the proposed WMFM achieves a 17% improvement in balanced accuracy for LoS/nLoS classification and a 48.5% reduction in localization error compared to the end-to-end (E2E) benchmark, while reducing training time by up to 90-fold. Even when trained with as little as 20% of the data, the WMFM-based heads outperform the fully supervised E2E model, underscoring their robustness and data-efficient learning. The proposed approach establishes a foundation for scalable, multimodal learning in Integrated Sensing and Communication (ISAC) systems, paving the way for intelligent and adaptive 6G networks.
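
The abstract describes a CLIP-style contrastive pretraining stage that aligns channel and camera embeddings in a shared space, followed by frozen encoders with lightweight task heads. Below is a minimal sketch of that idea; the encoder widths, embedding dimension, input sizes, and temperature are illustrative assumptions, not the paper's actual architecture.

```python
# Hedged sketch: symmetric InfoNCE-style contrastive alignment of two modalities
# (wireless channel coefficients and camera features), as the abstract describes.
# All dimensions and hyperparameters here are assumed for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Simple MLP encoder projecting one modality into a shared embedding space."""
    def __init__(self, in_dim: int, embed_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def contrastive_loss(z_channel, z_image, temperature: float = 0.07):
    """Symmetric InfoNCE loss pulling paired channel/image embeddings together."""
    logits = z_channel @ z_image.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(z_channel.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Hypothetical input sizes: real/imag parts of 64 subcarriers, pooled image features.
channel_enc = ModalityEncoder(in_dim=2 * 64)
image_enc = ModalityEncoder(in_dim=2048)
h, v = torch.randn(32, 128), torch.randn(32, 2048)
loss = contrastive_loss(channel_enc(h), image_enc(v))
```

For the downstream stage the abstract mentions, the pretrained encoders would be frozen (e.g. `requires_grad_(False)`) and only a small task-specific head, such as a linear regressor for localization or a binary classifier for LoS/nLoS, would be trained on the extracted embeddings.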
