Encoder-Decoder Networks for Self-Supervised Pretraining and Downstream Signal Bandwidth Regression on Digital Antenna Arrays (2307.03327v1)
Abstract: This work presents the first application of self-supervised learning to data from digital antenna arrays. Encoder-decoder networks are pretrained on digital array data with a self-supervised noisy-reconstruction task called channel in-painting, in which the network infers the contents of array channels that have been masked with zeros. This pretraining step requires no human-labeled data. The encoder architecture and weights from pretraining are then transferred to a new network with a task-specific decoder, and the new network is trained on a small volume of labeled data. We show that pretraining on unlabeled data allows the new network to perform bandwidth regression on digital array data better than an equivalent network trained on the same labeled data from random initialization.
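The channel in-painting pretext task described above (zero out a subset of array channels, then train an encoder-decoder to reconstruct them) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the array shape, mask fraction, and function names are all assumptions, and the loss here stands in for whatever reconstruction objective the actual network optimizes.

```python
import numpy as np

def mask_channels(x, mask_frac=0.25, rng=None):
    """Zero out a random subset of array channels.
    x: complex array of shape (n_channels, n_samples).
    Returns the masked copy and the indices that were zeroed."""
    rng = np.random.default_rng(rng)
    n_ch = x.shape[0]
    n_masked = max(1, int(mask_frac * n_ch))
    masked_idx = rng.choice(n_ch, size=n_masked, replace=False)
    x_masked = x.copy()
    x_masked[masked_idx, :] = 0
    return x_masked, masked_idx

def reconstruction_loss(pred, target, masked_idx):
    """Mean squared error over the masked channels only,
    covering both real and imaginary parts via |.|^2."""
    diff = pred[masked_idx] - target[masked_idx]
    return float(np.mean(np.abs(diff) ** 2))

# Toy snapshot: 8 channels of 128 complex baseband samples.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 128)) + 1j * rng.standard_normal((8, 128))
x_masked, idx = mask_channels(x, mask_frac=0.25, rng=1)

# A perfect reconstruction scores zero; the zero-filled input does not.
assert reconstruction_loss(x, x, idx) == 0.0
assert reconstruction_loss(x_masked, x, idx) > 0.0
```

In the pretraining loop, `x_masked` would be the network input, `x` the reconstruction target, and the encoder weights would afterwards be reused for the bandwidth-regression head.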