
Deep-Learned Compression for Radio-Frequency Signal Classification (2403.03150v1)

Published 5 Mar 2024 in cs.LG, cs.NI, and eess.SP

Abstract: Next-generation cellular concepts rely on processing large quantities of radio-frequency (RF) samples. This includes Radio Access Networks (RAN) connecting the cellular front-end, based on software-defined radios (SDRs), and a framework for AI processing of spectrum-related data. RF data collected by dense RAN radio units and spectrum sensors may need to be jointly processed for intelligent decision making, but moving large amounts of data to AI agents can incur significant bandwidth and latency costs. We propose a deep-learned compression (DLC) model, HQARF, based on learned vector quantization (VQ), to compress the complex-valued samples of RF signals drawn from six modulation classes. We assess the effects of HQARF on the performance of an AI model trained to infer the modulation class of the RF signal. Compressing narrow-band RF samples for training and off-site inference allows efficient use of bandwidth and storage for non-real-time analytics, and reduces delay in real-time applications. While exploring the effectiveness of HQARF signal reconstructions in modulation classification tasks, we highlight the DLC optimization space and some open problems related to training the VQ embedded in HQARF.
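To make the compression idea concrete, the following is a minimal sketch of vector quantization applied to complex-valued IQ samples: windows of samples are mapped to the nearest codeword in a small codebook, so only integer indices need to be transmitted or stored. This is an illustrative toy (plain k-means codebook training on random data, with made-up window and codebook sizes), not the HQARF model or its learned hierarchical VQ.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_iq_windows(n_windows=256, window_len=8):
    # Toy stand-in for narrow-band RF data: complex IQ windows,
    # flattened to real vectors of length 2 * window_len.
    iq = (rng.standard_normal((n_windows, window_len))
          + 1j * rng.standard_normal((n_windows, window_len)))
    return np.concatenate([iq.real, iq.imag], axis=1)

def vq_train(x, codebook_size=16, iters=20):
    # Plain k-means as a stand-in for a learned VQ: each codeword
    # moves toward the mean of the vectors assigned to it.
    codebook = x[rng.choice(len(x), codebook_size, replace=False)]
    for _ in range(iters):
        d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for k in range(codebook_size):
            members = x[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def vq_encode(x, codebook):
    # The compressed representation is just the index of the
    # nearest codeword for each window.
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

x = make_iq_windows()
codebook = vq_train(x)
idx = vq_encode(x, codebook)
x_hat = codebook[idx]  # lossy reconstruction fed to a downstream classifier
```

With these toy sizes, each window of 8 complex samples (16 float values) is replaced by a single 4-bit index, which is the kind of rate saving that motivates compressing RF samples before shipping them to off-site AI agents; the open question the paper studies is how much such lossy reconstructions degrade modulation classification.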

