
Input-distribution-aware parallel decoding of block codes (2105.06581v2)

Published 13 May 2021 in cs.IT and math.IT

Abstract: Many channel decoders rely on parallel decoding attempts to achieve good performance with acceptable latency. However, most of the time fewer attempts than the foreseen maximum suffice for successful decoding. Input-distribution-aware (IDA) decoding makes it possible to determine the parallelism of polar code list decoders by observing the distribution of the channel information. In this work, IDA decoding is shown to be effective with other codes and decoding algorithms as well. Two techniques, M-IDA and MD-IDA, are proposed: they exploit the sampling of the input distribution inherent to particular decoding algorithms to perform low-cost IDA decoding. Simulation results on the decoding of BCH codes via the Chase and ORBGRAND algorithms show that they perform at least as well as the original IDA decoding, reducing run-time complexity to as little as 17% and 67% with minimal error-correction degradation.
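As a rough illustration of the core idea, and not the paper's exact rule, the sketch below shows how a decoder might observe the channel-LLR distribution of a received word and scale the number of parallel decoding attempts accordingly. The summary statistic (mean absolute LLR), the threshold values, and the attempt counts are all hypothetical placeholders chosen for the example.

```python
import numpy as np

def select_parallelism(llrs, thresholds=(3.0, 2.0, 1.0), attempts=(1, 2, 4, 8)):
    """Choose the number of parallel decoding attempts from the
    observed channel-LLR distribution.

    A low mean reliability suggests a noisier received word, so more
    attempts (e.g. Chase test patterns or list paths) are launched.
    The thresholds and attempt counts are illustrative placeholders,
    not values taken from the paper.
    """
    reliability = np.mean(np.abs(llrs))  # summary statistic of the input distribution
    for t, n in zip(thresholds, attempts):
        if reliability >= t:
            return n
    return attempts[-1]  # very unreliable word: use the maximum parallelism

# Example: a received word with mostly confident LLRs needs few attempts.
rng = np.random.default_rng(0)
llrs = rng.normal(loc=4.0, scale=1.0, size=63)  # e.g. LLRs for a length-63 BCH codeword
print(select_parallelism(llrs))
```

The attraction of this kind of rule is that the statistic is nearly free to compute, so the average number of launched attempts, and hence run-time complexity, drops whenever the channel is good, while the maximum parallelism remains available for noisy received words.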

Citations (1)
