Lightweight Foundation Model for Wireless Time Series Downstream Tasks on Edge Devices (2511.14895v1)
Abstract: While machine learning is widely used to optimize wireless networks, training a separate model for each task in communication and localization is becoming increasingly unsustainable due to the significant costs of training and deployment. Foundation models offer a more scalable alternative by enabling a single model to be adapted to multiple tasks through fine-tuning with limited samples. However, current foundation models mostly rely on large-scale Transformer architectures, resulting in computationally intensive models unsuitable for deployment on typical edge devices. This paper presents a lightweight foundation model based on simple Multi-Layer Perceptron (MLP) encoders that independently process input patches. Our model supports four types of downstream tasks (long-range technology recognition, short-range technology recognition, modulation recognition, and line-of-sight detection) from multiple input types (IQ and CIR) and different sampling rates. We show that, unlike Transformers, which can exhibit performance drops as downstream tasks are added, our MLP model maintains robust generalization, achieving over 97% accuracy when fine-tuned on previously unseen data classes. These results are achieved with only 21K trainable parameters, allowing an inference time of 0.33 ms on common edge devices and making the model suitable for constrained real-time deployments.
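To make the patch-wise MLP encoder idea concrete, the sketch below shows one plausible way such a model could be structured: a shared MLP applied independently to fixed-size patches of an IQ or CIR time series, followed by a small task-specific head. This is a minimal illustration, not the authors' released code; the patch length, hidden sizes, and `PatchMLPEncoder`/`TaskHead` names are hypothetical, chosen only so the encoder's parameter count stays on the order of tens of thousands, in line with the 21K figure reported in the abstract.

```python
import torch
import torch.nn as nn


class PatchMLPEncoder(nn.Module):
    """Shared MLP applied independently to each time-series patch (illustrative)."""

    def __init__(self, patch_len=64, in_channels=2, hidden_dim=64, embed_dim=32):
        super().__init__()
        self.patch_len = patch_len
        # The same small MLP is reused for every patch, keeping parameters low.
        self.mlp = nn.Sequential(
            nn.Linear(patch_len * in_channels, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, x):
        # x: (batch, channels, length), e.g. channels = 2 for I and Q.
        b, c, l = x.shape
        n = l // self.patch_len
        x = x[:, :, : n * self.patch_len]
        # Reshape into (batch, n_patches, channels * patch_len) and encode each patch.
        patches = (
            x.reshape(b, c, n, self.patch_len)
            .permute(0, 2, 1, 3)
            .reshape(b, n, -1)
        )
        return self.mlp(patches)  # (batch, n_patches, embed_dim)


class TaskHead(nn.Module):
    """Small classification head fine-tuned per downstream task (illustrative)."""

    def __init__(self, embed_dim=32, num_classes=4):
        super().__init__()
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, z):
        # Pool patch embeddings, then classify.
        return self.fc(z.mean(dim=1))


if __name__ == "__main__":
    encoder = PatchMLPEncoder()
    head = TaskHead(num_classes=4)
    iq = torch.randn(8, 2, 1024)  # hypothetical batch of IQ recordings
    logits = head(encoder(iq))
    print(logits.shape)  # torch.Size([8, 4])
    print("encoder parameters:", sum(p.numel() for p in encoder.parameters()))
```

Because the MLP weights are shared across patches, the encoder can be fine-tuned with a new lightweight head for each downstream task (technology recognition, modulation recognition, LOS detection) without retraining a large backbone, which is the scalability argument the abstract makes.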