
Wideband and Entropy-Aware Deep Soft Bit Quantization (2110.09541v1)

Published 18 Oct 2021 in eess.SP, cs.IT, cs.LG, and math.IT

Abstract: Deep learning has recently been applied to physical layer processing in digital communication systems in order to improve end-to-end performance. In this work, we introduce a novel deep learning solution for soft bit quantization across wideband channels. Our method is trained end-to-end with quantization- and entropy-aware augmentations to the loss function and is used at inference in conjunction with source coding to achieve near-optimal compression gains over wideband channels. To efficiently train our method, we prove and verify that a fixed feature space quantization scheme is sufficient for efficient learning. When tested on channel distributions never seen during training, the proposed method achieves a compression gain of up to 10% in the high SNR regime versus previous state-of-the-art methods. To encourage reproducible research, our implementation is publicly available at https://github.com/utcsilab/wideband-llr-deep.
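To make the abstract's "quantization- and entropy-aware augmentations to the loss function" concrete, below is a minimal PyTorch sketch of the general idea: an encoder/decoder pair with a fixed uniform quantizer in feature space (trained with a straight-through estimator) and an entropy penalty on the quantized codes, so the learned representation is cheap for a downstream source coder to compress. This is not the authors' implementation (see the GitHub link above for that); the architecture, histogram-based entropy estimate, and all hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch of quantization- and entropy-aware training
# for soft bit (LLR) compression. All names and values are assumptions.
import torch
import torch.nn as nn

class SoftBitQuantizer(nn.Module):
    """Toy encoder/decoder with a fixed uniform quantizer in feature space."""
    def __init__(self, n_bits=4, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, 1))
        self.levels = 2 ** n_bits  # fixed feature-space quantization grid

    def quantize(self, z):
        # Uniform quantization on [-1, 1]; the straight-through trick
        # lets gradients flow through the non-differentiable rounding.
        zq = torch.round((z + 1) / 2 * (self.levels - 1))
        zq = zq / (self.levels - 1) * 2 - 1
        return z + (zq - z).detach()

    def forward(self, llr):
        z = self.encoder(llr)
        zq = self.quantize(z)
        return self.decoder(zq), zq

def entropy_penalty(zq, levels):
    # Soft histogram over quantization levels; its entropy (in bits)
    # approximates the rate a downstream source coder would need.
    centers = torch.linspace(-1, 1, levels, device=zq.device)
    weights = torch.exp(-50.0 * (zq.reshape(-1, 1) - centers) ** 2)
    probs = weights.sum(dim=0) / weights.sum()
    return -(probs * torch.log2(probs + 1e-9)).sum()

model = SoftBitQuantizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
llr = torch.randn(256, 1) * 4  # stand-in batch of soft bits (LLRs)

recon, zq = model(llr)
loss = nn.functional.mse_loss(recon, llr) \
       + 0.01 * entropy_penalty(zq, model.levels)  # entropy-aware term
opt.zero_grad(); loss.backward(); opt.step()
```

Training only the network weights while keeping the quantization grid fixed mirrors the paper's claim that a fixed feature-space quantization scheme is sufficient for efficient learning; the rate/distortion trade-off is then controlled by the entropy-penalty weight (0.01 here, chosen arbitrarily).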
