
Denoising Diffusion Probabilistic Models for Hardware-Impaired Communications (2309.08568v2)

Published 15 Sep 2023 in cs.IT, eess.SP, and math.IT

Abstract: Generative AI has received significant attention across a wide spectrum of industrial and academic domains, thanks to the remarkable results achieved by deep generative models such as generative pre-trained transformers (GPT) and diffusion models. In this paper, we explore the applications of denoising diffusion probabilistic models (DDPMs) in wireless communication systems under practical assumptions such as hardware impairments (HWI), the low-SNR regime, and quantization error. Diffusion models are a new class of state-of-the-art generative models that have already showcased notable success, with popular examples from OpenAI and Google Brain. The intuition behind DDPMs is to decompose the data generation process into small "denoising" steps. Inspired by this, we propose a denoising diffusion model-based receiver for a practical wireless communication scheme that provides network resilience under low-SNR regimes, non-Gaussian noise, different HWI levels, and quantization error. We evaluate the reconstruction performance of our scheme in terms of the mean-squared error (MSE) metric. Our results show that an MSE improvement of more than 25 dB is achieved compared to deep neural network (DNN)-based receivers. We also highlight robust out-of-distribution performance under non-Gaussian noise.
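The abstract's core idea is the standard DDPM recursion: corrupt a signal with Gaussian noise over many small steps, then learn to invert each step. The sketch below illustrates only that generic recursion, not the paper's receiver: the number of steps, the linear noise schedule, and the placeholder noise predictor `eps_model` are all assumptions, and the trained neural denoiser the authors use is stood in for by a trivial function.

```python
# Minimal sketch of the DDPM forward-noising / reverse-denoising recursion.
# Schedule and step count are assumed; `eps_model` is a hypothetical stand-in
# for the trained noise-prediction network described in the paper.
import numpy as np

T = 100                                  # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def forward_noising(x0, t, rng):
    """Sample x_t from q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return x_t, eps

def reverse_step(x_t, t, eps_model, rng):
    """One ancestral sampling step of p(x_{t-1} | x_t) given a noise predictor."""
    eps_hat = eps_model(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

# Toy usage: treat a noisy received block as x_T and run the reverse chain
# with a zero noise predictor standing in for the trained network.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)              # stand-in for a received signal block
for t in reversed(range(T)):
    x = reverse_step(x, t, lambda x_t, step: np.zeros_like(x_t), rng)
```

In the paper's setting, the receiver would run the reverse chain on the hardware-impaired, quantized observation; the MSE quoted in the abstract is measured between the reconstructed and transmitted signals.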
