Sequential Convolutional Recurrent Neural Networks for Fast Automatic Modulation Classification (1909.03050v2)

Published 9 Sep 2019 in eess.SP and cs.LG

Abstract: A novel and efficient end-to-end learning model for automatic modulation classification is proposed for wireless spectrum monitoring applications; it learns directly from time-domain in-phase and quadrature data without requiring hand-crafted expert features. With the intuition that convolutional layers with pooling serve as front-end feature distillation and dimensionality reduction, sequential convolutional recurrent neural networks are developed to take complementary advantage of the parallel computing capability of convolutional neural networks and the temporal sensitivity of recurrent neural networks. Experimental results demonstrate that the proposed architecture delivers overall superior performance in the signal-to-noise ratio range above -10 dB and improves classification accuracy from 80% to 92.1% at high signal-to-noise ratios, while drastically reducing the average training and prediction time by approximately 74% and 67%, respectively. Response patterns learned by the proposed architecture are visualized to better understand the physics of the model. Furthermore, a comparative study investigates the impact of various sequential convolutional recurrent neural network structure settings on classification performance. A representative architecture with a two-layer convolutional neural network followed by a two-layer long short-term memory network is developed as an option for fast automatic modulation classification.
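The architecture described in the abstract, a two-layer convolutional front end feeding a two-layer LSTM over raw I/Q samples, can be illustrated with a minimal PyTorch sketch. The filter counts, kernel sizes, hidden width, sequence length, and number of modulation classes below are illustrative assumptions, not the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn


class SequentialCRNN(nn.Module):
    """Sketch of a CNN front end + LSTM back end for raw I/Q classification.

    Hyperparameters here are placeholders chosen for illustration only.
    """

    def __init__(self, num_classes: int = 11, seq_len: int = 128):
        super().__init__()
        # Two convolutional layers with pooling: feature distillation and
        # temporal dimensionality reduction over the (2, seq_len) I/Q input.
        self.features = nn.Sequential(
            nn.Conv1d(in_channels=2, out_channels=64, kernel_size=8, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
            nn.Conv1d(64, 64, kernel_size=8, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
        )
        # Two-layer LSTM consumes the shortened feature sequence.
        self.lstm = nn.LSTM(input_size=64, hidden_size=128,
                            num_layers=2, batch_first=True)
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, seq_len) raw in-phase/quadrature samples
        f = self.features(x)            # (batch, 64, L') with L' ~ seq_len / 4
        f = f.transpose(1, 2)           # (batch, L', 64) for the LSTM
        _, (h_n, _) = self.lstm(f)      # take the final hidden state
        return self.classifier(h_n[-1])  # (batch, num_classes) logits


if __name__ == "__main__":
    model = SequentialCRNN()
    dummy = torch.randn(4, 2, 128)      # batch of 4 synthetic I/Q frames
    print(model(dummy).shape)           # torch.Size([4, 11])
```

The pooling stages shorten the sequence the LSTM must unroll over, which reflects the abstract's claim of reduced training and prediction time relative to a purely recurrent model on the full-length input.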

Citations (43)
