Quantized Polar Code Decoders: Analysis and Design (1902.10395v1)

Published 27 Feb 2019 in cs.IT and math.IT

Abstract: Applications of massive machine-type communications, such as sensor networks, smart metering, the internet of things, or process and factory automation, are forecast to have great economic impact in the next five to ten years. Low-complexity, energy- and cost-efficient communication schemes are essential to enable large-scale deployments. To target these requirements, we study decoding of polar codes with coarse quantization in the short block length regime. In particular, we devise schemes to mitigate the impact of coarse quantization, which has not been adequately explored in prior works. We introduce the 3-level quantized successive cancellation (SC) and SC list (SCL) decoders. Coarse quantization of log-likelihood ratios (LLRs) leads to quantization of path metrics (PMs). Quantized PMs severely impact the list management of SCL decoders, and hence cause performance degradation. Two mitigation strategies are presented: (1) selecting the winning codeword from the decoder's list based on maximum likelihood (ML) rather than PM; (2) utilizing statistical knowledge about the reliability of bit estimates in each decoding step to improve list management. We demonstrate the effectiveness of our techniques in simulations. In particular, our enhancements prove useful in the low code rate regime, where theory available in the literature predicts pronounced losses caused by quantization. Furthermore, we put our work into perspective by comparing it to finer quantization and partially unquantized schemes. This yields insights and recommendations as to which quantization schemes offer the best cost-benefit ratio for practical implementation.
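The core effect the abstract describes can be illustrated with a short sketch. The snippet below is not the paper's implementation: the threshold `t`, the function names, and the correlation-based ML selection are illustrative assumptions. It uses the common hardware-friendly SCL path-metric update (penalize a path by |LLR| when its bit decision contradicts the LLR sign) to show how 3-level LLRs collapse the path metric onto a coarse integer grid, and how mitigation (1) instead selects the list candidate that best correlates with the channel LLRs.

```python
import random

def quantize_llr(llr, t=0.5):
    # 3-level LLR quantizer to {-1, 0, +1}; the threshold t = 0.5 is an
    # illustrative choice, not the paper's optimized value.
    return 1.0 if llr > t else (-1.0 if llr < -t else 0.0)

def pm_update(pm, llr, bit):
    # Hardware-friendly SCL path-metric update: penalize a path by |LLR|
    # whenever its bit decision contradicts the sign of the LLR.
    agrees = (bit == 0 and llr >= 0) or (bit == 1 and llr < 0)
    return pm if agrees else pm + abs(llr)

def ml_select(candidates, channel_llrs):
    # Mitigation (1): choose the list candidate with the highest correlation
    # to the (finer-precision) channel LLRs instead of trusting the coarse PM.
    return max(candidates,
               key=lambda bits: sum((1 - 2 * b) * l
                                    for b, l in zip(bits, channel_llrs)))

random.seed(1)
llrs = [random.gauss(0.0, 2.0) for _ in range(256)]
bits = [random.randint(0, 1) for _ in range(256)]

pm_f = pm_q = 0.0
for llr, bit in zip(llrs, bits):
    pm_f = pm_update(pm_f, llr, bit)                 # float-precision metric
    pm_q = pm_update(pm_q, quantize_llr(llr), bit)   # 3-level metric

# With 3-level LLRs every penalty is 0 or 1, so the quantized path metric
# lands on a coarse integer grid -- the effect that degrades list management.
print(pm_q == int(pm_q))

# ML selection: the candidate matching the LLR signs beats its complement.
good = [0 if l >= 0 else 1 for l in llrs[:8]]
print(ml_select([[1 - b for b in good], good], llrs[:8]) == good)
```

Because many paths share the same integer-valued quantized metric, the decoder can no longer rank them reliably, which is why the paper's ML-based final selection and reliability-aware list management recover much of the loss.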

Citations (5)