Design Self-Supervised Pre-Training Strategies for Wireless Foundation Models Across Diverse Downstream Tasks
Develop self-supervised pre-training objectives and procedures, such as next-sample prediction, masking, and denoising, for a physical-layer wireless foundation model. The pre-trained model should support multiple downstream wireless tasks with heterogeneous data representations, enabling efficient fine-tuning with minimal additional data across varied telecom applications.
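To make the masking objective concrete, the sketch below shows one plausible instantiation: masked reconstruction over patches of complex IQ samples (represented as two real channels), in the style of masked autoencoders. All names and hyperparameters (IQEncoder, patch_len, mask_ratio, the Transformer sizes) are illustrative assumptions, not details from the source paper.

```python
# Hypothetical sketch of masked-reconstruction pre-training on IQ waveforms.
# Assumptions: patches of raw IQ samples are tokenized, a random subset of
# patch tokens is replaced by a learned mask token, and the model is trained
# to reconstruct the masked patches.
import torch
import torch.nn as nn


class IQEncoder(nn.Module):
    """Toy Transformer encoder over patches of IQ samples (2 real channels)."""

    def __init__(self, patch_len=16, d_model=128, n_layers=4, n_heads=4):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(2 * patch_len, d_model)        # IQ patch -> token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2 * patch_len)         # token -> IQ patch
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learned [MASK]

    def forward(self, iq, mask):
        # iq: (B, T, 2) real/imag pairs; mask: (B, N) boolean over patches
        B, T, _ = iq.shape
        patches = iq.reshape(B, -1, 2 * self.patch_len)       # (B, N, 2*P)
        tokens = self.embed(patches)
        # Replace masked patch tokens with the learned mask embedding.
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        return self.head(self.encoder(tokens)), patches


def masked_reconstruction_loss(model, iq, mask_ratio=0.5):
    B, T, _ = iq.shape
    n_patches = T // model.patch_len
    mask = torch.rand(B, n_patches, device=iq.device) < mask_ratio
    pred, target = model(iq, mask)
    # MSE computed only on masked patches, as in masked-autoencoder objectives.
    return ((pred - target) ** 2).mean(dim=-1)[mask].mean()


if __name__ == "__main__":
    model = IQEncoder()
    iq = torch.randn(8, 256, 2)  # stand-in for a batch of raw IQ waveforms
    loss = masked_reconstruction_loss(model, iq)
    loss.backward()
    print(f"pre-training loss: {loss.item():.4f}")
```

The other objectives named above would follow the same pattern: next-sample prediction swaps the boolean mask for a causal attention mask and shifts the reconstruction target by one patch, while denoising corrupts the input with synthetic channel impairments and reconstructs the clean waveform. Which objective transfers best across heterogeneous downstream representations is exactly the open question posed here.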
References
Nevertheless, designing effective pre-training strategies for multiple wireless downstream tasks with different data representations remains an open research question.
— Large-Scale AI in Telecom: Charting the Roadmap for Innovation, Scalability, and Enhanced Digital Experiences (arXiv:2503.04184, Shahid et al., 6 Mar 2025), Section 13.1.13, "LTM pre-training of a physical-layer foundation model"