Edge-Efficient Deep Learning Models for Automatic Modulation Classification: A Performance Analysis (2404.15343v1)
Abstract: Recent advances in deep learning (DL) for automatic modulation classification (AMC) of wireless signals have opened up numerous potential applications on resource-constrained edge devices. However, the development of optimized DL models suitable for edge deployments in wireless communications has yet to be studied in depth. In this work, we perform a thorough investigation of optimized convolutional neural networks (CNNs) developed for AMC using the three most commonly used model optimization techniques: a) pruning, b) quantization, and c) knowledge distillation. Furthermore, we propose optimized models that combine these techniques to fuse their complementary benefits. The performance of all proposed methods is evaluated in terms of sparsity, storage compression of network parameters, and the effect of the parameter reduction on classification accuracy. The experimental results show that the proposed individual and combined optimization techniques are highly effective for developing models with significantly lower complexity while maintaining, or even improving, classification performance compared to the benchmark CNNs.
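To make the three techniques named in the abstract concrete, here is a minimal PyTorch sketch of magnitude pruning, post-training dynamic quantization, and knowledge distillation. It is an illustration under stated assumptions, not the paper's method: the `TinyAMCNet` architecture, the 2×128 I/Q input shape, the 11-class output, and all hyperparameters (sparsity amount, temperature `T`, weighting `alpha`) are hypothetical placeholders rather than the authors' benchmark models.

```python
# Minimal sketch of the three optimization techniques discussed in the paper:
# (a) magnitude pruning, (b) dynamic INT8 quantization, (c) knowledge
# distillation. The toy CNN and all hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

class TinyAMCNet(nn.Module):
    """Toy CNN over raw I/Q frames shaped (batch, 1, 2, 128) -- hypothetical."""
    def __init__(self, num_classes: int = 11):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=(1, 3), padding=(0, 1))
        self.conv2 = nn.Conv2d(16, 32, kernel_size=(2, 3), padding=(0, 1))
        self.fc1 = nn.Linear(32 * 1 * 128, 64)
        self.fc2 = nn.Linear(64, num_classes)

    def forward(self, x):
        x = F.relu(self.conv1(x))   # -> (B, 16, 2, 128)
        x = F.relu(self.conv2(x))   # -> (B, 32, 1, 128)
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)

teacher = TinyAMCNet()  # stands in for a trained benchmark CNN
student = TinyAMCNet()

# (a) Pruning: zero out the 50% smallest-magnitude weights per layer.
# In a real pipeline the mask would be kept during fine-tuning so that
# pruned weights stay zero; here we make the sparsity permanent at once.
for module in student.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")

# (b) Quantization: post-training dynamic INT8 quantization of the linear
# layers (returns a quantized copy; `student` itself stays in float32).
quantized = torch.quantization.quantize_dynamic(
    student, {nn.Linear}, dtype=torch.qint8
)

# (c) Knowledge distillation: KL divergence between temperature-softened
# teacher and student logits, mixed with the ordinary cross-entropy loss.
def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# One illustrative distillation step on random stand-in data.
x = torch.randn(8, 1, 2, 128)
y = torch.randint(0, 11, (8,))
with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
```

The snippet shows each mechanism in isolation on the same toy model; in a combined scheme like those the paper evaluates, the steps would presumably be cascaded, e.g., distilling into a compact student, pruning and fine-tuning it, then quantizing the result for storage.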