Compression and Acceleration of Neural Networks for Communications (1907.13269v1)

Published 31 Jul 2019 in cs.IT, eess.SP, and math.IT

Abstract: Deep learning (DL) has achieved great success in signal processing and communications and has become a promising technology for future wireless systems. Existing works mainly focus on exploiting DL to improve the performance of communication systems. However, high memory requirements and computational complexity constitute a major hurdle for the practical deployment of DL-based communications. In this article, we investigate how to compress and accelerate the neural networks (NNs) in communication systems. After introducing the deployment challenges for DL-based communication algorithms, we discuss some representative NN compression and acceleration techniques. Afterwards, two case studies for multiple-input-multiple-output (MIMO) communications, including DL-based channel state information feedback and signal detection, are presented to show the feasibility and potential of these techniques. We finally identify some challenges in NN compression and acceleration for DL-based communications and provide a guideline for subsequent research.
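The abstract refers to representative NN compression and acceleration techniques without naming them here; magnitude-based weight pruning and low-bit uniform quantization are two widely used examples of such techniques. The sketch below is illustrative only and is not taken from the paper; the function names and the 70%-sparsity / 8-bit settings are hypothetical choices for demonstration.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries so that `sparsity` fraction of weights become zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

def uniform_quantize(weights, num_bits=8):
    """Simulated uniform quantization: round to a signed `num_bits` grid, then map back to floats."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = np.max(np.abs(weights))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale

# Example: compress the weight matrix of one dense layer (e.g., from a CSI-feedback or detection NN).
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512)).astype(np.float32)
W_pruned = magnitude_prune(W, sparsity=0.7)        # keep only the largest 30% of weights
W_compressed = uniform_quantize(W_pruned, num_bits=8)
print("nonzero fraction:", np.count_nonzero(W_compressed) / W_compressed.size)
```

In practice, pruning shrinks storage and enables sparse matrix kernels for acceleration, while quantization reduces both memory footprint and arithmetic cost on fixed-point hardware; the paper's case studies evaluate such trade-offs for MIMO CSI feedback and signal detection.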

Authors (5)
  1. Jiajia Guo (45 papers)
  2. Jinghe Wang (7 papers)
  3. Chao-Kai Wen (145 papers)
  4. Shi Jin (489 papers)
  5. Geoffrey Ye Li (198 papers)
Citations (56)
