Deep-Learned Compression for Radio-Frequency Signal Classification (2403.03150v1)
Abstract: Next-generation cellular concepts rely on the processing of large quantities of radio-frequency (RF) samples. This includes Radio Access Networks (RAN) connecting the cellular front-end based on software-defined radios (SDRs) and a framework for the AI processing of spectrum-related data. The RF data collected by the dense RAN radio units and spectrum sensors may need to be jointly processed for intelligent decision making. Moving large amounts of data to AI agents may incur significant bandwidth and latency costs. We propose a deep-learned compression (DLC) model, HQARF, based on learned vector quantization (VQ), to compress the complex-valued samples of RF signals spanning 6 modulation classes. We assess the effects of HQARF on the performance of an AI model trained to infer the modulation class of the RF signal. Compressing narrow-band RF samples for training and off-site inference will allow efficient use of bandwidth and storage for non-real-time analytics, and reduce delay in real-time applications. While exploring the effectiveness of the HQARF signal reconstructions in modulation classification tasks, we highlight the DLC optimization space and some open problems related to the training of the VQ embedded in HQARF.
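The core mechanism named in the abstract, learned vector quantization of complex-valued RF samples, can be sketched concretely. The snippet below is a minimal, hypothetical illustration in PyTorch, not the HQARF implementation: it maps each encoder latent to its nearest codebook entry and trains the codebook with the standard VQ-VAE codebook and commitment losses and a straight-through gradient estimator (van den Oord et al., 2017). All class and parameter names here are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbor vector quantizer with a straight-through
    gradient estimator (VQ-VAE style). Hypothetical sketch, not the
    actual HQARF code."""
    def __init__(self, num_codes: int = 256, dim: int = 64, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment-loss weight

    def forward(self, z: torch.Tensor):
        # z: (batch, time, dim) latent vectors produced by an encoder
        flat = z.reshape(-1, z.shape[-1])            # (B*T, dim)
        dists = torch.cdist(flat, self.codebook.weight)  # L2 distance to every code
        idx = dists.argmin(dim=1)                    # index of the nearest code
        z_q = self.codebook(idx).view_as(z)          # quantized latents
        # Codebook loss pulls codes toward encoder outputs;
        # commitment loss keeps encoder outputs near the codes.
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        z_q = z + (z_q - z).detach()                 # straight-through gradient
        return z_q, idx.view(z.shape[:-1]), loss

# Complex IQ samples are typically handled as 2-channel real tensors:
# a (batch, 2, N) I/Q array is encoded to (batch, T, dim) latents,
# quantized, and only the integer code indices need to be transmitted.
```

With a 256-entry codebook, each latent vector is transmitted as a single 8-bit index, so the achievable compression ratio is set by the codebook size and by how many complex samples each latent summarizes; this trade-off is part of what the abstract calls the DLC optimization space.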