Online Neural Model Fine-Tuning in Massive MIMO CSI Feedback: Taming The Communication Cost of Model Updates (2501.18250v2)
Abstract: Efficient channel state information (CSI) compression is essential in frequency division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems due to the significant feedback overhead. Recently, deep learning-based compression techniques have demonstrated superior performance across various data types, including CSI. However, these methods often suffer from performance degradation when the data distribution shifts, primarily due to limited generalization capabilities. To address this challenge, we propose an online model fine-tuning approach for CSI feedback in massive MIMO systems. We consider full-model fine-tuning, where both the encoder and decoder are jointly updated using recent CSI samples. A key challenge in this setup is the transmission of updated decoder parameters, which introduces additional feedback overhead. To mitigate this bottleneck, we incorporate the bit-rate of model updates into the fine-tuning objective and entropy code the updates jointly with the compressed CSI. To reduce the bit-rate, we design an efficient prior distribution that encourages the network to update only the most significant weights, thereby minimizing the overall model update cost. Our results show that full-model fine-tuning significantly enhances the rate-distortion (RD) performance of neural CSI compression despite the additional communication cost of model updates. Moreover, we investigate the impact of update frequency in dynamic wireless environments and identify an optimal fine-tuning interval that achieves the best RD trade-off.
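The core idea — folding the bit cost of decoder updates into the rate-distortion objective, with a prior that favors leaving most weights untouched — can be sketched in a few lines. The sketch below is illustrative only: the quantization step, the spike-probability `p_zero`, the geometric tail `decay`, and the trade-off weight `lam` are assumptions, not the paper's actual prior or hyperparameters.

```python
import numpy as np

def quantize(delta, step=0.02):
    """Quantize weight updates to an integer grid so they can be entropy coded.
    The step size is an illustrative assumption."""
    return np.round(delta / step).astype(int)

def update_bits(q, p_zero=0.95, decay=0.5):
    """Approximate bit cost of quantized updates under a sparsity-encouraging
    discrete prior: P(0) = p_zero, and nonzero levels +/-k share the remaining
    mass with a geometric tail. Zero updates are nearly free, so the objective
    pushes the network to change only the most significant weights."""
    k = np.abs(q)
    # Normalizing constant so the two-sided geometric tail sums to (1 - p_zero).
    c = (1 - p_zero) * (1 - decay) / (2 * decay)
    prob = np.where(k == 0, p_zero, c * decay ** np.maximum(k, 1))
    return float(-np.log2(prob).sum())

def rd_loss(distortion, csi_bits, delta_theta, lam=0.01):
    """Rate-distortion objective with the model-update rate folded in:
    distortion + lam * (CSI feedback bits + decoder-update bits)."""
    return distortion + lam * (csi_bits + update_bits(quantize(delta_theta)))
```

With this prior, an update vector that touches one weight costs far fewer bits than one that nudges every weight by the same amount, which is exactly the behavior the fine-tuning objective is meant to reward.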